Message-Id: <1496338747-20398-10-git-send-email-longman@redhat.com>
Date: Thu, 1 Jun 2017 13:39:07 -0400
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...hat.com>
Cc: linux-kernel@...r.kernel.org, x86@...nel.org,
	linux-alpha@...r.kernel.org, linux-ia64@...r.kernel.org,
	linux-s390@...r.kernel.org, linux-arch@...r.kernel.org,
	Davidlohr Bueso <dave@...olabs.net>,
	Dave Chinner <david@...morbit.com>,
	Waiman Long <longman@...hat.com>
Subject: [PATCH v5 9/9] locking/rwsem: Enable reader lock stealing

The rwsem has supported writer lock stealing for a long time. Reader
lock stealing isn't allowed as it may lead to writer lock starvation.
As a result, writers are currently preferred over readers. However,
preferring readers generally leads to better overall performance,
since multiple readers can hold the lock concurrently.

This patch now enables reader lock stealing on a rwsem as long as
the lock is reader-owned and optimistic spinning hasn't been disabled
because of a long writer wait. This improves overall performance
without running the risk of writer lock starvation.
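
For reference, count_has_writer() is introduced by an earlier patch in
this series and does not appear in the diff below. As a rough sketch
only, assuming the count word flags writer ownership behind a dedicated
mask (RWSEM_WRITER_MASK is a placeholder name here, not necessarily the
macro used by the series), the helper would look something like:

	/* Placeholder layout: assume one bit of sem->count flags writer ownership */
	#define RWSEM_WRITER_MASK	(1UL << 0)

	static inline bool count_has_writer(long count)
	{
		/* A non-zero writer field means the lock is writer-owned */
		return count & RWSEM_WRITER_MASK;
	}

With such a helper, a reader entering the slowpath can cheaply confirm
that only readers hold the lock before taking it without queueing.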
Signed-off-by: Waiman Long <longman@...hat.com>
---
kernel/locking/rwsem-xadd.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index a571bec..f5caba8 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -529,6 +529,14 @@ struct rw_semaphore __sched *rwsem_down_read_failed(struct rw_semaphore *sem)
 		goto enqueue;
 
 	/*
+	 * Steal the lock if no writer was present and the optimistic
+	 * spinning disable bit isn't set.
+	 */
+	count = atomic_long_read(&sem->count);
+	if (!count_has_writer(count))
+		return sem;
+
+	/*
 	 * Undo read bias from down_read operation to stop active locking if:
 	 * 1) Optimistic spinners are present;
 	 * 2) the wait_lock isn't free; or
--
1.8.3.1