Message-ID: <tip-81b5598665a24083dd889fbd8cb08b0d8de4b8ad@git.kernel.org>
Date: Mon, 23 Nov 2015 08:27:13 -0800
From: tip-bot for Waiman Long <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: tglx@...utronix.de, scott.norton@....com, Waiman.Long@....com,
mingo@...nel.org, linux-kernel@...r.kernel.org,
paulmck@...ux.vnet.ibm.com, torvalds@...ux-foundation.org,
akpm@...ux-foundation.org, peterz@...radead.org,
doug.hatch@....com, hpa@...or.com, dave@...olabs.net
Subject: [tip:locking/core] locking/qspinlock:
Prefetch the next node cacheline
Commit-ID: 81b5598665a24083dd889fbd8cb08b0d8de4b8ad
Gitweb: http://git.kernel.org/tip/81b5598665a24083dd889fbd8cb08b0d8de4b8ad
Author: Waiman Long <Waiman.Long@....com>
AuthorDate: Mon, 9 Nov 2015 19:09:22 -0500
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Mon, 23 Nov 2015 10:01:59 +0100
locking/qspinlock: Prefetch the next node cacheline
A queue head CPU, after acquiring the lock, will have to notify
the next CPU in the wait queue that it has become the new queue
head. This involves loading a new cacheline from the MCS node of the
next CPU. That operation can be expensive and adds to the latency of
the locking operation.
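
For context, a minimal sketch of an MCS-style handoff (names and
layout simplified for illustration; the real code lives in
kernel/locking/mcs_spinlock.h and kernel/locking/qspinlock.c) shows
why the unlock path must touch the successor's node:

struct mcs_node {
	struct mcs_node *next;	/* successor in the wait queue */
	int locked;		/* set to 1 to hand over ownership */
};

static void mcs_unlock_contended(struct mcs_node *node)
{
	struct mcs_node *next = READ_ONCE(node->next);

	/*
	 * Writing next->locked dirties the successor's node, which
	 * usually sits on a cacheline this CPU has never touched.
	 * That cache miss is the cost the prefetch described below
	 * tries to hide. (The real code also handles a NULL next by
	 * a cmpxchg on the queue tail; that path is omitted here.)
	 */
	if (next)
		WRITE_ONCE(next->locked, 1);
}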
This patch adds code to optimistically prefetch the next MCS node's
cacheline if the next pointer is defined and the CPU has been spinning
on the MCS lock for a while. This reduces the locking latency and
improves system throughput.
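
For reference, the generic prefetchw() used here is a thin wrapper
around a GCC builtin, roughly as defined in include/linux/prefetch.h
(architectures may override it with a native prefetch-for-write
instruction, e.g. PREFETCHW on x86):

/* generic fallback, roughly as in include/linux/prefetch.h */
#ifndef ARCH_HAS_PREFETCHW
#define prefetchw(x)	__builtin_prefetch(x, 1)  /* 1 = prefetch for write */
#endif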
The performance change will depend on whether the prefetch overhead
can be hidden within the latency of the lock spin loop. With a really
short critical section, there may be no performance gain at all. With
a longer critical section, however, it was found to give a performance
boost of 5-10% over a range of different queue depths with a spinlock
loop microbenchmark.
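
The microbenchmark itself is not included in the patch; as a rough
userspace analogue (entirely hypothetical, all names made up for
illustration), such a spinlock loop test can be sketched as:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Hypothetical spinlock-loop skeleton: N threads hammer one lock,
 * with a tunable critical-section length to mimic the "short vs.
 * long critical section" cases discussed above.
 */
static pthread_spinlock_t lock;
static volatile unsigned long shared_counter;

struct worker_arg {
	long iters;	/* lock/unlock iterations per thread */
	long cs_len;	/* busy-work inside the critical section */
};

static void *worker(void *p)
{
	struct worker_arg *arg = p;

	for (long i = 0; i < arg->iters; i++) {
		pthread_spin_lock(&lock);
		for (long j = 0; j < arg->cs_len; j++)
			shared_counter++;	/* critical section work */
		pthread_spin_unlock(&lock);
	}
	return NULL;
}

int main(int argc, char **argv)
{
	int nthreads = argc > 1 ? atoi(argv[1]) : 4;
	struct worker_arg arg = { .iters = 1000000, .cs_len = 100 };
	pthread_t tid[64];

	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	for (int i = 0; i < nthreads && i < 64; i++)
		pthread_create(&tid[i], NULL, worker, &arg);
	for (int i = 0; i < nthreads && i < 64; i++)
		pthread_join(tid[i], NULL);
	printf("counter = %lu\n", shared_counter);
	return 0;
}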
Signed-off-by: Waiman Long <Waiman.Long@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Davidlohr Bueso <dave@...olabs.net>
Cc: Douglas Hatch <doug.hatch@....com>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Scott J Norton <scott.norton@....com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: http://lkml.kernel.org/r/1447114167-47185-3-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/locking/qspinlock.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 7868418..365b203 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -407,6 +407,16 @@ queue:
pv_wait_node(node);
arch_mcs_spin_lock_contended(&node->locked);
+
+ /*
+ * While waiting for the MCS lock, the next pointer may have
+ * been set by another lock waiter. We optimistically load
+ * the next pointer & prefetch the cacheline for writing
+ * to reduce latency in the upcoming MCS unlock operation.
+ */
+ next = READ_ONCE(node->next);
+ if (next)
+ prefetchw(next);
}
/*
--