Message-Id: <20190906142541.34061-6-alex.kogan@oracle.com>
Date: Fri, 6 Sep 2019 10:25:41 -0400
From: Alex Kogan <alex.kogan@...cle.com>
To: linux@...linux.org.uk, peterz@...radead.org, mingo@...hat.com,
will.deacon@....com, arnd@...db.de, longman@...hat.com,
linux-arch@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, tglx@...utronix.de, bp@...en8.de,
hpa@...or.com, x86@...nel.org, guohanjun@...wei.com,
jglauber@...vell.com
Cc: steven.sistare@...cle.com, daniel.m.jordan@...cle.com,
alex.kogan@...cle.com, dave.dice@...cle.com,
rahul.x.yadav@...cle.com
Subject: [PATCH v4 5/5] locking/qspinlock: Introduce the shuffle reduction optimization into CNA
This optimization reduces the probability that threads will be shuffled between
the main and secondary queues when the secondary queue is empty.
It is helpful when the lock is only lightly contended.
Signed-off-by: Alex Kogan <alex.kogan@...cle.com>
Reviewed-by: Steve Sistare <steven.sistare@...cle.com>
---
kernel/locking/qspinlock_cna.h | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/kernel/locking/qspinlock_cna.h b/kernel/locking/qspinlock_cna.h
index e86182e6163b..1c3a8905b2ca 100644
--- a/kernel/locking/qspinlock_cna.h
+++ b/kernel/locking/qspinlock_cna.h
@@ -64,6 +64,15 @@ static DEFINE_PER_CPU(u32, seed);
#define INTRA_NODE_HANDOFF_PROB_ARG (16)
/*
+ * Controls the probability of enabling the scan of the main queue when
+ * the secondary queue is empty. The chosen value reduces the amount of
+ * unnecessary shuffling of threads between the two waiting queues when
+ * contention is low, while responding quickly enough to enable
+ * the shuffling when contention is high.
+ */
+#define SHUFFLE_REDUCTION_PROB_ARG (7)
+
+/*
* Return false with probability 1 / 2^@..._bits.
* Intuitively, the larger @num_bits the less likely false is to be returned.
* @num_bits must be a number between 0 and 31.
@@ -230,6 +239,16 @@ static inline void cna_pass_lock(struct mcs_spinlock *node,
u32 val = 1;
/*
+ * Limit thread shuffling when the secondary queue is empty.
+ * This copes with the overhead the shuffling creates when the
+ * lock is only lightly contended, and threads do not stay
+ * in the secondary queue long enough to reap the benefit of moving
+ * them there.
+ */
+ if (node->locked <= 1 && probably(SHUFFLE_REDUCTION_PROB_ARG))
+ goto pass_lock;
+
+ /*
* Try to find a successor running on the same NUMA node
* as the current lock holder. For long-term fairness,
* search for such a thread with high probability rather than always.
@@ -252,5 +271,6 @@ static inline void cna_pass_lock(struct mcs_spinlock *node,
((struct cna_node *)next_holder)->tail->mcs.next = next;
}
+pass_lock:
arch_mcs_pass_lock(&next_holder->locked, val);
}
--
2.11.0 (Apple Git-81)