Message-Id: <20150911224528.607805504@linuxfoundation.org>
Date: Fri, 11 Sep 2015 15:49:01 -0700
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Manfred Spraul <manfred@...orfullife.com>,
Oleg Nesterov <oleg@...hat.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Kirill Tkhai <ktkhai@...allels.com>,
Ingo Molnar <mingo@...hat.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Davidlohr Bueso <dave@...olabs.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 3.10 02/11] ipc/sem.c: update/correct memory barriers
3.10-stable review patch. If anyone has any objections, please let me know.
------------------
From: Manfred Spraul <manfred@...orfullife.com>
commit 3ed1f8a99d70ea1cd1508910eb107d0edcae5009 upstream.
sem_lock() did not properly pair memory barriers:
!spin_is_locked() and spin_unlock_wait() are both only control barriers.
The code needs an acquire barrier, otherwise the CPU might perform read
operations before the lock test.
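For illustration only (not part of the patch): a condensed sketch of the two
racing paths, with struct and field names as in ipc/sem.c and everything else
stripped away.

	/* CPU 1: complex operation, holding the global lock */
	sma->complex_count++;
	/* ... */
	spin_unlock(&sma->sem_perm.lock);

	/* CPU 2: simple operation, fast path in sem_lock() */
	spin_lock(&sem->lock);
	if (!spin_is_locked(&sma->sem_perm.lock)) {
		/*
		 * Only a control dependency so far: the CPU may load
		 * complex_count before the lock test, miss CPU 1's
		 * increment, and take the fast path although a complex
		 * operation is already in flight.
		 */
		if (sma->complex_count == 0)
			return sops->sem_num;	/* per-semaphore lock only */
	}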
As no primitive exists inside <linux/spinlock.h> and since it seems
no one wants another primitive, the code creates a local primitive within
ipc/sem.c.
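Roughly, with the new primitive the fast path reads as follows (condensed
from the sem_lock() hunk below; illustration only):

	spin_lock(&sem->lock);
	if (!spin_is_locked(&sma->sem_perm.lock)) {
		/* pairs with spin_unlock(&sma->sem_perm.lock) on CPU 1 */
		ipc_smp_acquire__after_spin_is_unlocked();
		/*
		 * complex_count is now up to date and cannot change
		 * until sem->lock is dropped.
		 */
		if (sma->complex_count == 0)
			return sops->sem_num;
	}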
With regard to -stable:
The change to sem_wait_array() is a bugfix; the change to sem_lock() is a
nop (just a preprocessor redefinition to improve readability). The
bugfix is necessary for all kernels that use sem_wait_array() (i.e.,
starting from 3.10).
Signed-off-by: Manfred Spraul <manfred@...orfullife.com>
Reported-by: Oleg Nesterov <oleg@...hat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Kirill Tkhai <ktkhai@...allels.com>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: Davidlohr Bueso <dave@...olabs.net>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
ipc/sem.c | 20 ++++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -253,6 +253,16 @@ static void sem_rcu_free(struct rcu_head
}
/*
+ * spin_unlock_wait() and !spin_is_locked() are not memory barriers, they
+ * are only control barriers.
+ * The code must pair with spin_unlock(&sem->lock) or
+ * spin_unlock(&sem_perm.lock), thus just the control barrier is insufficient.
+ *
+ * smp_rmb() is sufficient, as writes cannot pass the control barrier.
+ */
+#define ipc_smp_acquire__after_spin_is_unlocked() smp_rmb()
+
+/*
* Wait until all currently ongoing simple ops have completed.
* Caller must own sem_perm.lock.
* New simple ops cannot start, because simple ops first check
@@ -275,6 +285,7 @@ static void sem_wait_array(struct sem_ar
sem = sma->sem_base + i;
spin_unlock_wait(&sem->lock);
}
+ ipc_smp_acquire__after_spin_is_unlocked();
}
/*
@@ -326,8 +337,13 @@ static inline int sem_lock(struct sem_ar
/* Then check that the global lock is free */
if (!spin_is_locked(&sma->sem_perm.lock)) {
- /* spin_is_locked() is not a memory barrier */
- smp_mb();
+ /*
+ * We need a memory barrier with acquire semantics,
+ * otherwise we can race with another thread that does:
+ * complex_count++;
+ * spin_unlock(sem_perm.lock);
+ */
+ ipc_smp_acquire__after_spin_is_unlocked();
/* Now repeat the test of complex_count:
* It can't change anymore until we drop sem->lock.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/