Message-ID: <1430403123.2011.16.camel@stgolabs.net>
Date: Thu, 30 Apr 2015 07:12:03 -0700
From: Davidlohr Bueso <dave@...olabs.net>
To: Waiman Long <waiman.long@...com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org,
Jason Low <jason.low2@...com>,
Scott J Norton <scott.norton@...com>,
Douglas Hatch <doug.hatch@...com>
Subject: Re: [PATCH v3] locking/rwsem: reduce spinlock contention in wakeup
after up_read/up_write
On Wed, 2015-04-29 at 15:58 -0400, Waiman Long wrote:
> A write lock can also be acquired by a spinning writer in
> rwsem_try_write_lock_unqueued() where wait_lock isn't used. With
> multiple down_read's, it is possible that the first exiting reader wakes
> up a writer who acquires the write lock while the other readers are
> waiting to acquire the wait_lock.
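(For context, the lockless writer path mentioned above is roughly the
following -- a paraphrase of kernel/locking/rwsem-xadd.c from around this
time, not an exact quote, so check your tree. Note that wait_lock is
indeed never taken; the lock is acquired purely by a cmpxchg on the
counter, and the owner field is only updated separately afterwards:)

static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem)
{
        long old, count = READ_ONCE(sem->count);

        while (true) {
                /* only attempt it while there are no active lockers */
                if (!(count == 0 || count == RWSEM_WAITING_BIAS))
                        return false;

                /* the write lock itself is taken by this cmpxchg */
                old = cmpxchg(&sem->count, count,
                              count + RWSEM_ACTIVE_WRITE_BIAS);
                if (old == count)
                        return true;

                count = old;
        }
}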
Except that readers that do the wakeup do not call __rwsem_do_wake() if
there is an active writer:
/* If there are no active locks, wake the front queued process(es).
 *
 * If there are no writers and we are first in the queue,
 * wake our own waiter to join the existing active readers !
 */
if (count == RWSEM_WAITING_BIAS ||
    (count > RWSEM_WAITING_BIAS &&
     adjustment != -RWSEM_ACTIVE_READ_BIAS))
        sem = __rwsem_do_wake(sem, RWSEM_WAKE_ANY);
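(For reference, the 64-bit bias values involved here -- taken from the
rwsem headers of this era, so double-check against your tree -- are:

        RWSEM_UNLOCKED_VALUE    =  0x00000000L
        RWSEM_ACTIVE_BIAS       =  0x00000001L
        RWSEM_ACTIVE_MASK       =  0xffffffffL
        RWSEM_WAITING_BIAS      = -RWSEM_ACTIVE_MASK - 1   /* -0x100000000 */
        RWSEM_ACTIVE_READ_BIAS  =  RWSEM_ACTIVE_BIAS
        RWSEM_ACTIVE_WRITE_BIAS =  RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS

So with waiters queued, an active writer drives count below
RWSEM_WAITING_BIAS and neither branch of the check above fires, which is
the point.)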
And for the reader part, your rwsem_has_active_writer() check, while
avoiding the counter atomic update, would break the current semantics,
in which we still do the next reader grant -- also note the unlikely()
predictor ;) And that check is done with the counter, so by using the
owner you would have a race between the cmpxchg and rwsem_set_owner().
Yes, it's a small window (especially after commit 7a215f89), but it is
there nonetheless and could cause even more bogus wakeups. Again, I
simply do not like mixing these two -- we get away with it in the
optimistic spinning code because we just iterate again, but that is not
the case here.
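To make that window concrete, here is a small standalone userspace model
of the two steps (the model_* names and the simplified counter encoding
are made up for illustration only -- this is obviously not the kernel
code):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* toy state: a counter that tracks acquisition and a separately
 * published owner pointer, mirroring sem->count vs. sem->owner */
struct model_rwsem {
        atomic_long count;              /* 0 == unlocked, -1 == writer held (simplified) */
        _Atomic(void *) owner;
};

#define MODEL_WRITE_HELD (-1L)

/* step 1: the write lock itself is taken by a cmpxchg on the counter */
static bool model_write_lock_counter(struct model_rwsem *sem)
{
        long expected = 0;

        return atomic_compare_exchange_strong(&sem->count, &expected,
                                              MODEL_WRITE_HELD);
}

/* step 2: the owner is published only after the counter update */
static void model_write_set_owner(struct model_rwsem *sem, void *me)
{
        atomic_store(&sem->owner, me);
}

/* an owner-based "is a writer active?" test, as opposed to a counter-based one */
static bool model_owner_sees_writer(struct model_rwsem *sem)
{
        return atomic_load(&sem->owner) != NULL;
}

int main(void)
{
        struct model_rwsem sem = { .count = 0, .owner = NULL };
        int me;

        model_write_lock_counter(&sem);         /* counter says: writer active */
        printf("after counter cmpxchg:  owner sees writer? %d\n",
               model_owner_sees_writer(&sem));  /* prints 0 -- the window */

        model_write_set_owner(&sem, &me);       /* owner becomes visible */
        printf("after owner published:  owner sees writer? %d\n",
               model_owner_sees_writer(&sem));  /* prints 1 */
        return 0;
}

The spot between the two printf() calls is exactly where an owner-based
rwsem_has_active_writer() would miss a writer that the counter already
accounts for.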
Ultimately, is this really an issue? Do you have numbers that could
justify such a change? I suspect all the benchmark results you posted in
the patch are from reducing the spinlock contention, not from this.
Thanks,
Davidlohr