Message-ID: <20080225160123.11268.6904.stgit@novell1.haskins.net>
Date: Mon, 25 Feb 2008 11:01:23 -0500
From: Gregory Haskins <ghaskins@...ell.com>
To: mingo@...e.hu, a.p.zijlstra@...llo.nl, tglx@...utronix.de,
rostedt@...dmis.org, linux-rt-users@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, bill.huey@...il.com,
kevin@...man.org, cminyard@...sta.com, dsingleton@...sta.com,
dwalker@...sta.com, npiggin@...e.de, dsaxena@...xity.net,
ak@...e.de, pavel@....cz, acme@...hat.com, gregkh@...e.de,
sdietrich@...ell.com, pmorreale@...ell.com, mkohari@...ell.com,
ghaskins@...ell.com
Subject: [(RT RFC) PATCH v2 9/9] remove the extra call to try_to_take_rt_mutex
From: Peter W. Morreale <pmorreale@...ell.com>
Remove the redundant attempt to take the lock before entering the wait
loop. While it is true that with this patch the exit path performs an
unnecessary xchg (in the event the lock is granted without any traversal
of the loop), experimentation shows that we almost never encounter this
situation.
Signed-off-by: Peter W. Morreale <pmorreale@...ell.com>
---
kernel/rtmutex.c | 6 ------
1 files changed, 0 insertions(+), 6 deletions(-)
diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index b81bbef..266ae31 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -756,12 +756,6 @@ rt_spin_lock_slowlock(struct rt_mutex *lock)
spin_lock_irqsave(&lock->wait_lock, flags);
init_lists(lock);
- /* Try to acquire the lock again: */
- if (try_to_take_rt_mutex(lock)) {
- spin_unlock_irqrestore(&lock->wait_lock, flags);
- return;
- }
-
BUG_ON(rt_mutex_owner(lock) == current);
/*
--