Message-Id: <1457461223-4301-4-git-send-email-dave@stgolabs.net>
Date: Tue, 8 Mar 2016 10:20:23 -0800
From: Davidlohr Bueso <dave@...olabs.net>
To: tglx@...utronix.de, mingo@...nel.org
Cc: peterz@...radead.org, bigeasy@...utronix.de,
umgwanakikbuti@...il.com, paulmck@...ux.vnet.ibm.com,
dave@...olabs.net, linux-kernel@...r.kernel.org,
Davidlohr Bueso <dbueso@...e.de>
Subject: [PATCH 3/3] rtmutex: Reduce top-waiter blocking on a lock
From: Davidlohr Bueso <dbueso@...e.de>
By applying well-known spin-on-lock-owner techniques, we can avoid the
blocking overhead while a task is trying to take the rtmutex. The idea is
that as long as the owner is running, there is a fair chance it will
release the lock soon, and thus a task trying to acquire the rtmutex is
better off spinning than blocking immediately after the fastpath. This is
similar to what we use for other locks, borrowed from -rt. The main
difference (due to the obvious realtime constraints) is that top-waiter
spinning must account for any new higher-priority waiter, and therefore
cannot steal the lock and avoid the pi-dance. As such, there will be at
most one spinning waiter on a contended lock.
Conditions to stop spinning and block are simple (a sketch of this
decision follows below):
(1) Upon need_resched()
(2) The current lock owner blocks
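As a rough illustration only, below is a minimal userspace C sketch of
that spin-or-block decision. All of the names (demo_lock, demo_owner,
demo_spin_on_owner, should_resched) are made up for the example; the
real implementation is rt_mutex_spin_on_owner() in the diff below, which
additionally holds rcu_read_lock() so the owner task_struct stays valid
while its ->on_cpu field is read.

#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel structures. */
struct demo_owner {
        atomic_bool on_cpu;                     /* is the owner currently running? */
};

struct demo_lock {
        _Atomic(struct demo_owner *) owner;     /* NULL when the lock is free */
};

/*
 * Spin as long as the same owner still holds the lock and is running.
 * Returns true when the caller should retry the acquisition, false when
 * it should give up and block (the owner went to sleep, or we need to
 * reschedule).
 */
static bool demo_spin_on_owner(struct demo_lock *lock,
                               struct demo_owner *owner,
                               bool (*should_resched)(void))
{
        if (!owner)
                return true;    /* lock may already be free: retry right away */

        while (atomic_load(&lock->owner) == owner) {
                if (!atomic_load(&owner->on_cpu) || should_resched())
                        return false;   /* stop spinning, fall back to blocking */

                sched_yield();  /* crude stand-in for cpu_relax_lowlatency() */
        }

        return true;            /* owner changed or released the lock: retry */
}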
The unlock side remains unchanged, as wake_up_process() can safely deal
with calls where the task is not actually blocked (TASK_NORMAL). The only
cost is some unnecessary overhead dealing with the wake_q, but this
ensures we do not miss any wakeups between the spinning step and the
unlocking side.
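To make the "harmless extra wakeup" point concrete, here is a small
futex-based userspace analogue (this is not the kernel's API, and the
helper names futex_wake_one()/futex_wait_if() are hypothetical): issuing
a wake when nobody is asleep is a cheap no-op, and a waiter that only
sleeps if the lock word still holds the value it saw cannot lose a wake
issued while it was still spinning. In the rtmutex case the spinner also
notices the owner change directly in its spin loop, so at worst the
wake_q handling is redundant work.

#define _GNU_SOURCE
#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

/*
 * Unlock side: always issue the wake. If the "top waiter" is still
 * spinning rather than blocked, this is just a cheap no-op, much like
 * wake_up_process() on a task that is already TASK_RUNNING.
 */
void futex_wake_one(atomic_int *addr)
{
        syscall(SYS_futex, addr, FUTEX_WAKE, 1, NULL, NULL, 0);
}

/*
 * Lock side: only go to sleep if the lock word still holds the value we
 * saw when deciding to block. A release/wake that raced with our
 * spinning makes FUTEX_WAIT return immediately, so the wakeup is not
 * lost between the spinning step and the unlocking side.
 */
void futex_wait_if(atomic_int *addr, int expected)
{
        syscall(SYS_futex, addr, FUTEX_WAIT, expected, NULL, NULL, 0);
}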
Measuring the number of inversions reported by the pi_stress program,
there is a fairly consistent improvement in throughput over a 30-second
window. On a 32-core box, with increasing thread-group counts:
pistress
                        4.4.3                  4.4.3
                      vanilla         rtmutex-spinv1
Hmean    1     2321586.73 (  0.00%)   2339847.23 (  0.79%)
Hmean    4     8209026.49 (  0.00%)   8597809.55 (  4.74%)
Hmean    7    12655322.45 (  0.00%)  13194896.45 (  4.26%)
Hmean    12    4210477.03 (  0.00%)   4348643.08 (  3.28%)
Hmean    21    2996823.05 (  0.00%)   3104513.47 (  3.59%)
Hmean    30    2463107.53 (  0.00%)   2584275.71 (  4.91%)
Hmean    48    2656668.46 (  0.00%)   2719324.53 (  2.36%)
Hmean    64    2397253.65 (  0.00%)   2471628.92 (  3.10%)
Stddev   1      653473.88 (  0.00%)    527076.59 (-19.34%)
Stddev   4      664995.50 (  0.00%)    359487.15 (-45.94%)
Stddev   7      248476.88 (  0.00%)    278307.31 ( 12.01%)
Stddev   12      74537.42 (  0.00%)     54305.86 (-27.14%)
Stddev   21      72143.80 (  0.00%)     40371.42 (-44.04%)
Stddev   30      31981.43 (  0.00%)     42306.07 ( 32.28%)
Stddev   48      21317.95 (  0.00%)     42608.50 ( 99.87%)
Stddev   64      23433.99 (  0.00%)     21502.56 ( -8.24%)
Signed-off-by: Davidlohr Bueso <dbueso@...e.de>
---
kernel/Kconfig.locks | 4 +++
kernel/locking/rtmutex.c | 78 ++++++++++++++++++++++++++++++++++++-----
kernel/locking/rtmutex_common.h | 2 +-
3 files changed, 75 insertions(+), 9 deletions(-)
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index ebdb004..8602871 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -227,6 +227,10 @@ config MUTEX_SPIN_ON_OWNER
def_bool y
depends on SMP && !DEBUG_MUTEXES && ARCH_SUPPORTS_ATOMIC_RMW
+config RT_MUTEX_SPIN_ON_OWNER
+ def_bool y
+ depends on SMP && RT_MUTEXES && !DEBUG_RT_MUTEXES && ARCH_SUPPORTS_ATOMIC_RMW
+
config RWSEM_SPIN_ON_OWNER
def_bool y
depends on SMP && RWSEM_XCHGADD_ALGORITHM && ARCH_SUPPORTS_ATOMIC_RMW
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 60fa79a..8eb99b7 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -69,6 +69,48 @@ static void fixup_rt_mutex_waiters(struct rt_mutex *lock)
clear_rt_mutex_waiters(lock);
}
+#ifdef CONFIG_RT_MUTEX_SPIN_ON_OWNER
+static bool rt_mutex_spin_on_owner(struct rt_mutex *lock,
+ struct task_struct *owner)
+{
+ bool ret = true;
+
+ /*
+ * The last owner could have just released the lock,
+ * immediately try taking it again.
+ */
+ if (!owner)
+ goto done;
+
+ rcu_read_lock();
+ while (rt_mutex_owner(lock) == owner) {
+ /*
+ * Ensure we emit the owner->on_cpu, dereference _after_
+ * checking lock->owner still matches owner. If that fails,
+ * owner might point to freed memory. If it still matches,
+ * the rcu_read_lock() ensures the memory stays valid.
+ */
+ barrier();
+ if (!owner->on_cpu || need_resched()) {
+ ret = false;
+ break;
+ }
+
+ cpu_relax_lowlatency();
+ }
+ rcu_read_unlock();
+done:
+ return ret;
+}
+
+#else
+static bool rt_mutex_spin_on_owner(struct rt_mutex *lock,
+ struct task_struct *owner)
+{
+ return false;
+}
+#endif
+
/*
* We can speed up the acquire/release, if there's no debugging state to be
* set up.
@@ -130,6 +172,12 @@ static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock,
* unlock(wait_lock);
* lock(wait_lock);
* acquire(lock);
+ *
+ * Care must be taken when performing spin-on-owner techniques for
+ * the top waiter. Because this relies on lockless owner->on_cpu,
+ * it can cause nil owner pointer dereferencing. This condition is
+ * taken care of in rt_mutex_spin_on_owner() and will cause the task
+ * to immediately block.
*/
return rt_mutex_cmpxchg_release(lock, owner, NULL);
}
@@ -989,12 +1037,15 @@ static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
rt_mutex_dequeue_pi(current, waiter);
/*
- * As we are waking up the top waiter, and the waiter stays
- * queued on the lock until it gets the lock, this lock
- * obviously has waiters. Just set the bit here and this has
- * the added benefit of forcing all new tasks into the
- * slow path making sure no task of lower priority than
- * the top waiter can steal this lock.
+ * As we are potentially waking up the top waiter, and the waiter
+ * stays queued on the lock until it gets the lock, this lock
+ * obviously has waiters. Just set the bit here and this has the
+ * added benefit of forcing all new tasks into the slow path
+ * making sure no task of lower priority than the top waiter can
+ * steal this lock.
+ *
+ * If the top waiter, otoh, is spinning on ->owner, this will also
+ * serve to exit out of the loop and try to acquire the lock.
*/
lock->owner = (void *) RT_MUTEX_HAS_WAITERS;
@@ -1090,7 +1141,7 @@ void rt_mutex_adjust_pi(struct task_struct *task)
}
/**
- * __rt_mutex_slowlock() - Perform the wait-wake-try-to-take loop
+ * __rt_mutex_slowlock() - Perform the spin or wait-wake-try-to-take loop
* @lock: the rt_mutex to take
* @state: the state the task should block in (TASK_INTERRUPTIBLE
* or TASK_UNINTERRUPTIBLE)
@@ -1105,6 +1156,7 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state,
struct rt_mutex_waiter *waiter)
{
int ret = 0;
+ struct rt_mutex_waiter *top_waiter;
for (;;) {
/* Try to acquire the lock: */
@@ -1125,11 +1177,21 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state,
break;
}
+ top_waiter = rt_mutex_top_waiter(lock);
+
raw_spin_unlock_irq(&lock->wait_lock);
debug_rt_mutex_print_deadlock(waiter);
- schedule();
+ /*
+ * At this point the PI-dance is done, and, as the top waiter,
+ * we are next in line for the lock. Try to spin on the current
+ * owner for a while, in the hope that the lock will be released
+ * soon. Otherwise fallback and block.
+ */
+ if (top_waiter != waiter ||
+ !rt_mutex_spin_on_owner(lock, rt_mutex_owner(lock)))
+ schedule();
raw_spin_lock_irq(&lock->wait_lock);
set_current_state(state);
diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h
index c714737..cba4b81 100644
--- a/kernel/locking/rtmutex_common.h
+++ b/kernel/locking/rtmutex_common.h
@@ -76,7 +76,7 @@ task_top_pi_waiter(struct task_struct *p)
static inline struct task_struct *rt_mutex_owner(struct rt_mutex *lock)
{
return (struct task_struct *)
- ((unsigned long)lock->owner & ~RT_MUTEX_OWNER_MASKALL);
+ ((unsigned long)READ_ONCE(lock->owner) & ~RT_MUTEX_OWNER_MASKALL);
}
/*
--
2.1.4