Message-ID: <20160819121328.GD10153@twins.programming.kicks-ass.net>
Date: Fri, 19 Aug 2016 14:13:28 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Jason Low <jason.low2@....com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Ding Tianhong <dingtianhong@...wei.com>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <Will.Deacon@....com>,
Ingo Molnar <mingo@...hat.com>, imre.deak@...el.com,
linux-kernel@...r.kernel.org, Waiman Long <Waiman.Long@....com>,
Davidlohr Bueso <dave@...olabs.net>,
Tim Chen <tim.c.chen@...ux.intel.com>, terry.rudd@....com,
"Paul E. McKenney" <paulmck@...ibm.com>, jason.low2@...com
Subject: Re: [PATCH v4] locking/mutex: Prevent lock starvation when spinning
is disabled

On Thu, Aug 18, 2016 at 09:11:16PM -0700, Jason Low wrote:
> 3. Only clear yield_to_waiter if the thread is the top waiter and not if it
> is a non-top waiter that received a signal.

Ah, and that is an equivalent condition to the singular one I use to
test if we need to set pending, since if we just added the waiter and
it's the top waiter, it must be the only waiter.
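
(Purely as an illustration of that equivalence, a minimal userspace
sketch; the toy list type, waiter struct and helpers below are made-up
stand-ins for list_head / mutex_waiter, not the kernel API:)

#include <assert.h>
#include <stdio.h>

/* Toy stand-ins (hypothetical, userspace) for list_head and mutex_waiter. */
struct list_node { struct list_node *prev, *next; };
struct toy_waiter { struct list_node list; int id; };

static void list_init(struct list_node *h) { h->prev = h->next = h; }

/* Append n at the tail, i.e. just before the list head h. */
static void list_add_tail(struct list_node *n, struct list_node *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

/* "n is the top/first waiter": the head's next pointer points at n. */
static int is_head(struct list_node *n, struct list_node *h)
{
	return h->next == n;
}

/* "exactly one waiter queued": non-empty and first == last. */
static int is_singular(struct list_node *h)
{
	return h->next != h && h->next == h->prev;
}

int main(void)
{
	struct list_node wait_list;
	struct toy_waiter a = { .id = 1 }, b = { .id = 2 };

	list_init(&wait_list);

	/* First waiter appended: it is the head, and the list is singular. */
	list_add_tail(&a.list, &wait_list);
	assert(is_head(&a.list, &wait_list) == is_singular(&wait_list));
	assert(is_head(&a.list, &wait_list));

	/* Second waiter appended: it is not the head, and the list is not singular. */
	list_add_tail(&b.list, &wait_list);
	assert(is_head(&b.list, &wait_list) == is_singular(&wait_list));
	assert(!is_head(&b.list, &wait_list));

	printf("head-after-add <=> singular, for the waiter just added\n");
	return 0;
}
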
How about something like so on top?

--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -74,7 +74,6 @@ EXPORT_SYMBOL(__mutex_init);
  */
 __visible void __sched __mutex_lock_slowpath(atomic_t *lock_count);
 
-
 static inline bool need_yield_to_waiter(struct mutex *lock);
 
 /**
@@ -457,6 +456,12 @@ static bool mutex_optimistic_spin(struct
 }
 #endif
 
+static inline bool
+__mutex_waiter_is_head(struct mutex *lock, struct mutex_waiter *waiter)
+{
+	return list_first_entry(&lock->wait_list, struct mutex_waiter, list) == waiter;
+}
+
 #if !defined(CONFIG_MUTEX_SPIN_ON_OWNER) && defined(CONFIG_SMP)
 #define MUTEX_WAKEUP_THRESHOLD	16
 
@@ -471,7 +476,7 @@ static inline void clear_yield_to_waiter
 					 struct mutex_waiter *waiter)
 {
 	/* Only clear yield_to_waiter if we are the top waiter. */
-	if (lock->wait_list.next == &waiter->list && lock->yield_to_waiter)
+	if (lock->yield_to_waiter && __mutex_waiter_is_head(lock, waiter))
 		lock->yield_to_waiter = false;
 }
 
@@ -648,7 +653,7 @@ __mutex_lock_common(struct mutex *lock,
 	 * If this is the first waiter, mark the lock as having pending
 	 * waiters, if we happen to acquire it while doing so, yay!
 	 */
-	if (list_is_singular(&lock->wait_list) &&
+	if (__mutex_waiter_is_head(lock, &waiter) &&
 	    __mutex_trylock_pending(lock))
 		goto remove_waiter;
 
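
For completeness: list_first_entry() is container_of() applied to the
head's ->next pointer, so the helper tests the same pointer relation as
the open-coded lock->wait_list.next == &waiter->list it replaces in
clear_yield_to_waiter(). A rough userspace sketch of that expansion,
with made-up toy types and a simplified container_of() for illustration
only:

#include <assert.h>
#include <stddef.h>

/* Simplified container_of(): recover the enclosing struct from a member pointer. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct list_node { struct list_node *prev, *next; };

/* Hypothetical waiter; 'list' deliberately not the first member. */
struct toy_waiter {
	int id;
	struct list_node list;
};

int main(void)
{
	struct list_node wait_list;
	struct toy_waiter w = { .id = 1 };

	/* Hand-build a list with w as the only queued waiter. */
	wait_list.next = wait_list.prev = &w.list;
	w.list.next = w.list.prev = &wait_list;

	/* Helper-style check: first entry, recovered via container_of(), is w. */
	int via_helper =
		container_of(wait_list.next, struct toy_waiter, list) == &w;

	/* Old open-coded check: head's ->next points at w's embedded node. */
	int via_open_coded = (wait_list.next == &w.list);

	assert(via_helper && via_open_coded);
	return 0;
}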