Message-Id: <1425226731-27724-1-git-send-email-manfred@colorfullife.com>
Date: Sun, 1 Mar 2015 17:18:51 +0100
From: Manfred Spraul <manfred@...orfullife.com>
To: Oleg Nesterov <oleg@...hat.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: LKML <linux-kernel@...r.kernel.org>, 1vier1@....de,
Peter Zijlstra <peterz@...radead.org>,
Kirill Tkhai <ktkhai@...allels.com>,
Ingo Molnar <mingo@...hat.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Manfred Spraul <manfred@...orfullife.com>,
<stable@...r.kernel.org>
Subject: [PATCH] ipc/sem.c: Update/correct memory barriers
3rd version of the patch:
sem_lock() did not properly pair memory barriers:
!spin_is_locked() and spin_unlock_wait() are both only control barriers.
The code needs an acquire barrier; otherwise the CPU might perform read
operations before the lock test.
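To illustrate the race (a simplified sketch of the problematic
interleaving; the surrounding locking code and error paths are elided):

	/* CPU 1: a complex operation finishes */
	sma->complex_count++;
	spin_unlock(&sma->sem_perm.lock);	/* release */

	/* CPU 2: sem_lock() fast path */
	spin_lock(&sem->lock);
	if (!spin_is_locked(&sma->sem_perm.lock)) {
		/*
		 * Only a control barrier: the read of complex_count
		 * below may be performed before the lock test and
		 * observe the old value 0, missing CPU 1's increment.
		 */
		if (sma->complex_count == 0)
			return;	/* wrongly takes the fast path */
	}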
The patch:
- defines new barriers that default to smp_rmb().
- converts ipc/sem.c to the new barriers.
With regard to -stable:
The change to sem_wait_array() is a bugfix; the change to sem_lock()
is a no-op (just a preprocessor redefinition to improve readability).
The bugfix is necessary for all kernels that use sem_wait_array()
(i.e. starting from 3.10).
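For reference, the intended pairing after this patch (a sketch that
mirrors the sem_lock() hunk below; surrounding code elided):

	/* CPU 1 */
	sma->complex_count++;
	spin_unlock(&sma->sem_perm.lock);	/* release */

	/* CPU 2 */
	if (!spin_is_locked(&sma->sem_perm.lock)) {
		/* pairs with the release from spin_unlock() above */
		smp_acquire__after_spin_is_unlocked();
		/*
		 * The re-read of complex_count is now ordered after
		 * the lock test.
		 */
		if (sma->complex_count == 0)
			...
	}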
Signed-off-by: Manfred Spraul <manfred@...orfullife.com>
Reported-by: Oleg Nesterov <oleg@...hat.com>
Cc: <stable@...r.kernel.org>
---
 include/linux/spinlock.h | 15 +++++++++++++++
 ipc/sem.c                |  8 ++++----
2 files changed, 19 insertions(+), 4 deletions(-)
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 3e18379..5049ff5 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -140,6 +140,21 @@ do { \
#define smp_mb__after_unlock_lock() do { } while (0)
#endif
+/*
+ * spin_unlock_wait() and !spin_is_locked() are not memory barriers; they
+ * are only control barriers. Thus a memory barrier is required if the
+ * operation should act as an acquire memory barrier, i.e. if it should
+ * pair with the release memory barrier from the spin_unlock() that
+ * released the spinlock.
+ * smp_rmb() is sufficient, as writes cannot pass the implicit control barrier.
+ */
+#ifndef smp_acquire__after_spin_unlock_wait
+#define smp_acquire__after_spin_unlock_wait() smp_rmb()
+#endif
+#ifndef smp_acquire__after_spin_is_unlocked
+#define smp_acquire__after_spin_is_unlocked() smp_rmb()
+#endif
+
/**
* raw_spin_unlock_wait - wait until the spinlock gets unlocked
* @lock: the spinlock in question.
diff --git a/ipc/sem.c b/ipc/sem.c
index 9284211..d580cfa 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -275,6 +275,7 @@ static void sem_wait_array(struct sem_array *sma)
sem = sma->sem_base + i;
spin_unlock_wait(&sem->lock);
}
+ smp_acquire__after_spin_unlock_wait();
}
/*
@@ -327,13 +328,12 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
/* Then check that the global lock is free */
if (!spin_is_locked(&sma->sem_perm.lock)) {
/*
- * The ipc object lock check must be visible on all
- * cores before rechecking the complex count. Otherwise
- * we can race with another thread that does:
+ * We need a memory barrier with acquire semantics;
+ * otherwise we can race with another thread that does:
* complex_count++;
* spin_unlock(sem_perm.lock);
*/
- smp_rmb();
+ smp_acquire__after_spin_is_unlocked();
/*
* Now repeat the test of complex_count:
--
2.1.0