Message-ID: <tip-0e06e5be70d392aa842c1455ec2d0baf62aeed48@git.kernel.org>
Date: Mon, 6 Jul 2015 08:35:20 -0700
From: tip-bot for Waiman Long <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: will.deacon@....com, mingo@...nel.org, scott.norton@...com,
arnd@...db.de, torvalds@...ux-foundation.org, tglx@...utronix.de,
peterz@...radead.org, linux-kernel@...r.kernel.org, hpa@...or.com,
doug.hatch@...com, Waiman.Long@...com
Subject: [tip:locking/urgent] locking/qrwlock: Better optimization for interrupt context readers
Commit-ID: 0e06e5be70d392aa842c1455ec2d0baf62aeed48
Gitweb: http://git.kernel.org/tip/0e06e5be70d392aa842c1455ec2d0baf62aeed48
Author: Waiman Long <Waiman.Long@...com>
AuthorDate: Fri, 19 Jun 2015 11:50:01 -0400
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Mon, 6 Jul 2015 14:11:28 +0200
locking/qrwlock: Better optimization for interrupt context readers
The qrwlock is fair in process context, but becomes unfair in interrupt
context to support use cases like the tasklist_lock.

The current code isn't well documented about what happens in interrupt
context. rspin_until_writer_unlock() will only spin if the writer has
already taken the lock. If the writer is still in the waiting state, the
increment of the reader count keeps the writer in the waiting state, and
the new interrupt-context reader gets the lock and returns immediately.
The current code, however, does an additional read of the lock value
which isn't necessary, as that information is already available from the
fast path. This can sometimes cause an extra cacheline transfer when the
lock is highly contended.

This patch passes the lock value obtained in the fast path to the slow
path to eliminate the additional read. It also documents the behaviour
of interrupt-context readers more clearly.
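
For reference, the spin helper in question looks roughly like this in the
kernel/locking/qrwlock.c of that era (a simplified sketch, not part of this
patch):

    static __always_inline void
    rspin_until_writer_unlock(struct qrwlock *lock, u32 cnts)
    {
            /*
             * Only spin while a writer actually holds the lock (_QW_LOCKED).
             * If the writer is merely waiting (_QW_WAITING), the loop body
             * is never entered and the interrupt-context reader returns
             * right away.
             */
            while ((cnts & _QW_WMASK) == _QW_LOCKED) {
                    cpu_relax_lowlatency();
                    cnts = smp_load_acquire((u32 *)&lock->cnts);
            }
    }

With the lock value forwarded from the fast path, the slow path can perform
that first check without issuing another smp_load_acquire() of lock->cnts.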
Signed-off-by: Waiman Long <Waiman.Long@...com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Will Deacon <will.deacon@....com>
Cc: Arnd Bergmann <arnd@...db.de>
Cc: Douglas Hatch <doug.hatch@...com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Scott J Norton <scott.norton@...com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: http://lkml.kernel.org/r/1434729002-57724-3-git-send-email-Waiman.Long@hp.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
include/asm-generic/qrwlock.h | 4 ++--
kernel/locking/qrwlock.c | 13 +++++++------
2 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/include/asm-generic/qrwlock.h b/include/asm-generic/qrwlock.h
index 55e3ee1..deb9e8b 100644
--- a/include/asm-generic/qrwlock.h
+++ b/include/asm-generic/qrwlock.h
@@ -36,7 +36,7 @@
/*
* External function declarations
*/
-extern void queued_read_lock_slowpath(struct qrwlock *lock);
+extern void queued_read_lock_slowpath(struct qrwlock *lock, u32 cnts);
extern void queued_write_lock_slowpath(struct qrwlock *lock);
/**
@@ -105,7 +105,7 @@ static inline void queued_read_lock(struct qrwlock *lock)
return;
/* The slowpath will decrement the reader count, if necessary. */
- queued_read_lock_slowpath(lock);
+ queued_read_lock_slowpath(lock, cnts);
}
/**
diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
index 49057d4..d9c36c5 100644
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -62,20 +62,21 @@ rspin_until_writer_unlock(struct qrwlock *lock, u32 cnts)
/**
* queued_read_lock_slowpath - acquire read lock of a queue rwlock
* @lock: Pointer to queue rwlock structure
+ * @cnts: Current qrwlock lock value
*/
-void queued_read_lock_slowpath(struct qrwlock *lock)
+void queued_read_lock_slowpath(struct qrwlock *lock, u32 cnts)
{
- u32 cnts;
-
/*
* Readers come here when they cannot get the lock without waiting
*/
if (unlikely(in_interrupt())) {
/*
- * Readers in interrupt context will spin until the lock is
- * available without waiting in the queue.
+ * Readers in interrupt context will get the lock immediately
+ * if the writer is just waiting (not holding the lock yet).
+ * The rspin_until_writer_unlock() function returns immediately
+ * in this case. Otherwise, they will spin until the lock
+ * is available without waiting in the queue.
*/
- cnts = smp_load_acquire((u32 *)&lock->cnts);
rspin_until_writer_unlock(lock, cnts);
return;
}
--