Message-ID: <20160520115819.GF3193@twins.programming.kicks-ass.net>
Date:	Fri, 20 May 2016 13:58:19 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Davidlohr Bueso <dave@...olabs.net>
Cc:	manfred@...orfullife.com, Waiman.Long@....com, mingo@...nel.org,
	torvalds@...ux-foundation.org, ggherdovich@...e.com,
	mgorman@...hsingularity.net, linux-kernel@...r.kernel.org,
	Paul McKenney <paulmck@...ux.vnet.ibm.com>,
	Will Deacon <will.deacon@....com>
Subject: Re: sem_lock() vs qspinlocks

On Thu, May 19, 2016 at 10:39:26PM -0700, Davidlohr Bueso wrote:
> As such, the following restores the behavior of the ticket locks and 'fixes'
> (or hides?) the bug in sems. Naturally incorrect approach:
> 
> @@ -290,7 +290,8 @@ static void sem_wait_array(struct sem_array *sma)
> 
> 	for (i = 0; i < sma->sem_nsems; i++) {
> 		sem = sma->sem_base + i;
> -               spin_unlock_wait(&sem->lock);
> +               while (atomic_read(&sem->lock))
> +                       cpu_relax();
> 	}
> 	ipc_smp_acquire__after_spin_is_unlocked();
> }

The actual bug is clear_pending_set_locked() not having acquire
semantics. The above 'fixes' things because the whole-word atomic_read()
will observe either the old pending bit or the new locked bit, so it
doesn't matter if the store flipping them is delayed.

The comment in queued_spin_lock_slowpath() above the smp_cond_acquire()
states that that acquire is sufficient, but this is incorrect in the
face of spin_is_locked()/spin_unlock_wait() users that only look at the
lock byte.
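
From memory, the comment and wait loop in question read roughly like the
below; quoted here only for reference, the exact wording may be off:

	/*
	 * We're pending, wait for the owner to go away.
	 *
	 * *,1,1 -> *,1,0
	 *
	 * this wait loop must be a load-acquire such that we match the
	 * store-release that clears the locked bit and create an
	 * acquire-release order between the exclusive owner and us.
	 */
	smp_cond_acquire(!(atomic_read(&lock->val) & _Q_LOCKED_MASK));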

The problem is that clear_pending_set_locked() is an unordered store;
the store can therefore be delayed, though no later than the
corresponding spin_unlock() (which orders against it due to the address
dependency).
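
For reference, with _Q_PENDING_BITS == 8 that helper is (roughly, from
memory) nothing but a plain WRITE_ONCE() over the pending+locked bytes:

	static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
	{
		struct __qspinlock *l = (void *)lock;

		/* *,1,0 -> *,0,1 -- plain store, no ordering implied */
		WRITE_ONCE(l->locked_pending, _Q_LOCKED_VAL);
	}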

This opens numerous races; for example:

	ipc_lock_object(&sma->sem_perm);
	sem_wait_array(sma);

				false   ->	spin_is_locked(&sma->sem_perm.lock)

is entirely possible, because sem_wait_array() consists of pure reads,
so the store can pass all that, even on x86.
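
Spelled out as a (simplified, from memory of ipc/sem.c) interleaving:

	CPU0					CPU1

	ipc_lock_object(&sma->sem_perm);
	  /* queued_spin_lock_slowpath();
	     clear_pending_set_locked()'s
	     store not yet visible */
	sem_wait_array(sma);
	  /* pure loads of the per-sem
	     locks; the delayed store
	     passes all of them */
						spin_lock(&sem->lock);
						spin_is_locked(&sma->sem_perm.lock);
						  /* returns false -- stale,
						     proceeds with only
						     sem->lock held */

CPU0 sees every sem->lock free, CPU1 sees sem_perm.lock free, and both
enter what each thinks is an exclusive section.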

The below 'hack' seems to solve the problem.

_However_ this also means the atomic_cmpxchg_relaxed() in the locked:
branch is equally wrong -- although not visible on x86. And note that
atomic_cmpxchg_acquire() would not in fact be sufficient either, since
the acquire is on the LOAD not the STORE of the LL/SC.
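
E.g. on arm64 (pre-LSE), atomic_cmpxchg_acquire() is roughly the
following ll/sc loop (pseudo-asm, operand names made up); the acquire is
on the exclusive load, while the exclusive store carries no ordering of
its own and can be delayed just the same:

	1:	ldaxr	w0, [Xlock]		// load-acquire exclusive
		cmp	w0, Wold
		b.ne	2f
		stxr	w1, Wnew, [Xlock]	// plain store-exclusive, unordered
		cbnz	w1, 1b
	2: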

I need a break of sorts, because after twisting my head around the sem
code and then the qspinlock code I'm wrecked. I'll try and make a proper
patch if people can indeed confirm my thinking here.

---
 kernel/locking/qspinlock.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index ce2f75e32ae1..348e172e774f 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -366,6 +366,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * *,1,0 -> *,0,1
 	 */
 	clear_pending_set_locked(lock);
+	smp_mb();
 	return;
 
 	/*
