Message-Id: <1407450408-11679-3-git-send-email-Waiman.Long@hp.com>
Date: Thu, 7 Aug 2014 18:26:43 -0400
From: Waiman Long <Waiman.Long@...com>
To: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, Davidlohr Bueso <davidlohr@...com>,
Jason Low <jason.low2@...com>,
Scott J Norton <scott.norton@...com>,
Waiman Long <Waiman.Long@...com>
Subject: [PATCH v2 2/7] locking/rwsem: threshold-limited spinning for active readers
Even though only writers can perform optimistic spinning, there is
still a chance that readers may take the lock before a spinning writer
can get it. In that case, the owner field will be NULL, and if some of
the lock-owning readers are not running, the spinning writer may spin
indefinitely until its time quantum expires.
This patch handles this special case by doing threshold-limited
spinning when the owner field is NULL. The threshold is small enough
that even when the readers are not running, not many spinning cycles
are wasted. The patch thereby strikes a balance between giving up too
early (and losing a potential performance gain) and burning too many
precious CPU cycles when some lock-owning readers are not running.
Signed-off-by: Waiman Long <Waiman.Long@...com>
---
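For reference, below is a minimal user-space sketch of the same
threshold-limited spinning idea. It is not the kernel code: it uses C11
atomics, and try_write_lock()/owner_is_running() are hypothetical
stand-ins for rwsem_try_write_lock_unqueued()/rwsem_spin_on_owner().

/* Minimal sketch of threshold-limited spinning, assuming C11 atomics. */
#include <stdatomic.h>
#include <stdbool.h>

#define SPIN_READ_THRESHOLD     64

struct task;                            /* opaque owner handle */

struct sketch_rwsem {
        _Atomic(struct task *) owner;   /* NULL when readers own the lock */
};

/* Hypothetical helpers standing in for the real rwsem primitives. */
extern bool try_write_lock(struct sketch_rwsem *sem);
extern bool owner_is_running(struct sketch_rwsem *sem, struct task *owner);

static bool optimistic_spin(struct sketch_rwsem *sem)
{
        int spincnt = 0;

        for (;;) {
                struct task *owner = atomic_load(&sem->owner);

                if (!owner) {
                        /*
                         * Readers (or nobody) hold the lock; spin at most
                         * SPIN_READ_THRESHOLD times before giving up.
                         */
                        if (spincnt++ >= SPIN_READ_THRESHOLD)
                                return false;
                } else if (!owner_is_running(sem, owner)) {
                        /* The owning writer is asleep; stop spinning. */
                        return false;
                } else {
                        /* A running writer owns the lock; reset the budget. */
                        spincnt = 0;
                }

                /* Try to grab the write lock on every iteration. */
                if (try_write_lock(sem))
                        return true;
        }
}

As in the patch, the reader-spin budget is deliberately small, so the
writer bails out quickly when preempted readers hold the lock, while
each sighting of a running writer owner resets the budget and keeps the
spinner going.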
kernel/locking/rwsem-xadd.c | 30 +++++++++++++++++++++++++++++-
1 files changed, 29 insertions(+), 1 deletions(-)
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index d38cfae..ddd56d2 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -304,6 +304,14 @@ static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
/*
+ * Threshold for optimistic spinning on readers
+ *
+ * This is the maximum number of consecutive spins allowed before the
+ * spinner gives up when the owner field is NULL.
+ */
+#define SPIN_READ_THRESHOLD 64
+
+/*
* Try to acquire write lock before the writer has been put on wait queue.
*/
static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem)
@@ -381,10 +389,20 @@ bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
return sem->owner == NULL;
}
+/*
+ * With active writer, spinning is done by checking if that writer is on
+ * CPU. With active readers, there is no easy way to determine if all of
+ * them are active. So it falls back to spin a certain number of times
+ * (SPIN_READ_THRESHOLD) before giving up. The threshold is relatively
+ * small with the expectation that readers are quick. For slow readers,
+ * the spinners will still fall back to sleep. On the other hand, it won't
+ * waste too many cycles when the lock owning readers are not running.
+ */
static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
{
struct task_struct *owner;
bool taken = false;
+ int spincnt = 0;
preempt_disable();
@@ -397,8 +415,18 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
while (true) {
owner = ACCESS_ONCE(sem->owner);
- if (owner && !rwsem_spin_on_owner(sem, owner))
+ if (!owner) {
+ /*
+ * Give up spinning if spincnt reaches the threshold.
+ */
+ if (spincnt++ >= SPIN_READ_THRESHOLD)
+ break;
+ } else if (!rwsem_spin_on_owner(sem, owner)) {
break;
+ } else {
+ /* Reset count when the owner field is non-NULL */
+ spincnt = 0;
+ }
/* wait_lock will be acquired if write_lock is obtained */
if (rwsem_try_write_lock_unqueued(sem)) {
--
1.7.1