Message-ID: <573F7723.8030201@hpe.com>
Date: Fri, 20 May 2016 16:44:19 -0400
From: Waiman Long <waiman.long@....com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Davidlohr Bueso <dave@...olabs.net>, <manfred@...orfullife.com>,
<mingo@...nel.org>, <torvalds@...ux-foundation.org>,
<ggherdovich@...e.com>, <mgorman@...hsingularity.net>,
<linux-kernel@...r.kernel.org>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Will Deacon <will.deacon@....com>
Subject: Re: sem_lock() vs qspinlocks
On 05/20/2016 07:58 AM, Peter Zijlstra wrote:
> On Thu, May 19, 2016 at 10:39:26PM -0700, Davidlohr Bueso wrote:
>> As such, the following restores the behavior of the ticket locks and 'fixes'
>> (or hides?) the bug in sems. Naturally incorrect approach:
>>
>> @@ -290,7 +290,8 @@ static void sem_wait_array(struct sem_array *sma)
>>
>>  	for (i = 0; i < sma->sem_nsems; i++) {
>>  		sem = sma->sem_base + i;
>> -		spin_unlock_wait(&sem->lock);
>> +		while (atomic_read(&sem->lock))
>> +			cpu_relax();
>>  	}
>>  	ipc_smp_acquire__after_spin_is_unlocked();
>>  }
> The actual bug is clear_pending_set_locked() not having acquire
> semantics. And the above 'fixes' things because it will observe the old
> pending bit or the locked bit, so it doesn't matter if the store
> flipping them is delayed.
clear_pending_set_locked() is not the only place where the lock is set.
If there is more than one waiter, the queuing path will be used instead.
In that case, set_locked(), which is also an unordered store, is what
sets the lock.
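
For reference, here is a paraphrased sketch of the two stores in
question, along the lines of the 4.6-era kernel/locking/qspinlock.c
(the _Q_PENDING_BITS == 8 byte-store variant; treat the exact bodies
as an approximation, not verbatim source):

/*
 * Paraphrased sketch, not verbatim kernel source.
 *
 * Pending-bit fast path: clear the pending bit and take the lock
 * with a single plain store -- no acquire semantics.
 */
static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
{
	struct __qspinlock *l = (void *)lock;

	WRITE_ONCE(l->locked_pending, _Q_LOCKED_VAL);
}

/*
 * Queuing path: the MCS queue head takes the lock, again with a
 * plain unordered store.
 */
static __always_inline void set_locked(struct qspinlock *lock)
{
	struct __qspinlock *l = (void *)lock;

	WRITE_ONCE(l->locked, _Q_LOCKED_VAL);
}

Either store can be delayed in the store buffer, so a CPU spinning on
the lock word in spin_unlock_wait() can still observe the lock as free
after a new owner has already claimed it.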
Cheers,
Longman