From: Steven Rostedt

Try to take the lock again as soon as we go into the
rt_spin_lock_slowlock() code, before doing the setup and wait loop.
This makes the code closer to what rt_mutex_slowlock() does, which
will help in simplifying this code in later commits.

Signed-off-by: Steven Rostedt
---
 kernel/rtmutex.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index f0ce334..318d7ed 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -829,6 +829,11 @@ rt_spin_lock_slowlock(struct rt_mutex *lock)
 	raw_spin_lock_irqsave(&lock->wait_lock, flags);
 	init_lists(lock);
 
+	if (do_try_to_take_rt_mutex(lock, STEAL_LATERAL)) {
+		raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
+		return;
+	}
+
 	BUG_ON(rt_mutex_owner(lock) == current);
 
 	/*
-- 
1.7.2.3

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/