Message-ID: <20160511183328.GA10711@linux-uzut.site>
Date: Wed, 11 May 2016 11:33:28 -0700
From: Davidlohr Bueso <dave@...olabs.net>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Jason Low <jason.low2@...com>, Ingo Molnar <mingo@...hat.com>,
linux-kernel@...r.kernel.org,
Scott J Norton <scott.norton@....com>,
Waiman Long <Waiman.Long@....com>, peter@...leysoftware.com,
Jason Low <jason.low2@....com>
Subject: Re: [PATCH] locking/rwsem: Optimize write lock slowpath
On Wed, 11 May 2016, Peter Zijlstra wrote:
>On Mon, May 09, 2016 at 12:16:37PM -0700, Jason Low wrote:
>> When acquiring the rwsem write lock in the slowpath, we first try
>> to set count to RWSEM_WAITING_BIAS. When that is successful,
>> we then atomically add the RWSEM_WAITING_BIAS in cases where
>> there are other tasks on the wait list. This causes write lock
>> operations to often issue multiple atomic operations.
>>
>> We can instead make the list_is_singular() check first, and then
>> set the count accordingly, so that we issue at most 1 atomic
>> operation when acquiring the write lock and reduce unnecessary
>> cacheline contention.
>>
>> Signed-off-by: Jason Low <jason.low2@...com>
Acked-by: Davidlohr Bueso <dave@...olabs.net>

(one nit: the patch title could be more informative as to what
optimization we are talking about here... ie: 'reduce atomic ops
in writer slowpath' or something.)
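Also, for anyone who doesn't have the xadd bias scheme paged in, here
is a rough sketch of the arithmetic the ternary in the hunk below
relies on (64-bit values per asm-generic/rwsem.h; 32-bit uses a 16-bit
active mask):

	#define RWSEM_ACTIVE_BIAS	0x00000001L
	#define RWSEM_ACTIVE_MASK	0xffffffffL
	#define RWSEM_WAITING_BIAS	(-RWSEM_ACTIVE_MASK-1)
	#define RWSEM_ACTIVE_WRITE_BIAS	(RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)

	/*
	 * count == RWSEM_WAITING_BIAS: zero active readers/writers, at
	 * least one waiter queued.  We are the waiter being granted the
	 * lock and will dequeue ourselves, so:
	 *
	 *  - list_is_singular(): nobody is left behind,
	 *    new count = RWSEM_ACTIVE_WRITE_BIAS
	 *  - otherwise: keep the waiting bias for those still queued,
	 *    new count = RWSEM_ACTIVE_WRITE_BIAS + RWSEM_WAITING_BIAS
	 */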
>> ---
>> kernel/locking/rwsem-xadd.c | 20 +++++++++++++-------
>> 1 file changed, 13 insertions(+), 7 deletions(-)
>>
>> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
>> index df4dcb8..23c33e6 100644
>> --- a/kernel/locking/rwsem-xadd.c
>> +++ b/kernel/locking/rwsem-xadd.c
>> @@ -258,14 +258,20 @@ EXPORT_SYMBOL(rwsem_down_read_failed);
>>  static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
>>  {
>>  	/*
>> -	 * Try acquiring the write lock. Check count first in order
>> -	 * to reduce unnecessary expensive cmpxchg() operations.
>> +	 * Avoid trying to acquire write lock if count isn't RWSEM_WAITING_BIAS.
>>  	 */
>> -	if (count == RWSEM_WAITING_BIAS &&
>> -	    cmpxchg_acquire(&sem->count, RWSEM_WAITING_BIAS,
>> -		    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
>> -		if (!list_is_singular(&sem->wait_list))
>> -			rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
>> +	if (count != RWSEM_WAITING_BIAS)
>> +		return false;
>> +
>> +	/*
>> +	 * Acquire the lock by trying to set it to ACTIVE_WRITE_BIAS. If there
>> +	 * are other tasks on the wait list, we need to add on WAITING_BIAS.
>> +	 */
>> +	count = list_is_singular(&sem->wait_list) ?
>> +			RWSEM_ACTIVE_WRITE_BIAS :
>> +			RWSEM_ACTIVE_WRITE_BIAS + RWSEM_WAITING_BIAS;
>> +
>> +	if (cmpxchg_acquire(&sem->count, RWSEM_WAITING_BIAS, count) == RWSEM_WAITING_BIAS) {
>>  		rwsem_set_owner(sem);
>>  		return true;
>>  	}
>
>Right; so that whole thing works because we're holding sem->wait_lock.
>Should we clarify that someplace?
Yes, exactly: rwsem_try_write_lock() is always called with the wait_lock
held, unlike its unqueued cousin, rwsem_try_write_lock_unqueued().
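To make the hazard concrete, this is roughly the interleaving that
holding the wait_lock rules out (illustrative only, not actual kernel
code):

	/*
	 *  CPU0 (head waiter, in               CPU1 (new writer
	 *       rwsem_try_write_lock)               arriving)
	 *  -----------------------------       ------------------------
	 *  list_is_singular() -> true
	 *  pick RWSEM_ACTIVE_WRITE_BIAS
	 *                                      spin_lock(&sem->wait_lock);
	 *                                      list_add_tail(&waiter.list,
	 *                                                    &sem->wait_list);
	 *                                      spin_unlock(&sem->wait_lock);
	 *  cmpxchg_acquire() succeeds, but
	 *  the new count carries no
	 *  WAITING_BIAS for CPU1's waiter
	 *
	 * -> the eventual up_write() sees no waiter bias, skips the
	 *    wakeup, and CPU1 is stranded.  Holding the wait_lock across
	 *    both the list check and the cmpxchg makes them one atomic
	 *    decision.
	 */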
Thanks,
Davidlohr