Message-ID: <1422582768.2418.31.camel@j-VirtualBox>
Date: Thu, 29 Jan 2015 17:52:48 -0800
From: Jason Low <jason.low2@...com>
To: Davidlohr Bueso <dave@...olabs.net>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Michel Lespinasse <walken@...gle.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
linux-kernel@...r.kernel.org, jason.low2@...com
Subject: Re: Refactoring mutex spin on owner code
On Thu, 2015-01-29 at 15:15 -0800, Davidlohr Bueso wrote:
> On Thu, 2015-01-29 at 12:18 -0800, Jason Low wrote:
> > /*
> > - * We break out the loop above on need_resched() and when the
> > - * owner changed, which is a sign for heavy contention. Return
> > - * success only when lock->owner is NULL.
> > + * We break out the loop above on either need_resched(), when
> > + * the owner is not running, or when the lock owner changed.
> > + * Return success only when the lock owner changed.
> > */
> > - return lock->owner == NULL;
> > + return lock->owner != owner;
> > }
>
> Ideally we would refactor all this, along with getting rid of
> owner_running() at some point. It no longer makes sense to split up
> mutex_spin_on_owner() and we're doing duplicate owner checks. It would
> also be simpler than having to guess why we broke out of the loop, for
> example.
Sure, that makes sense. What do you think of this additional change for
refactoring the mutex version?
---
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 8711505..b6a8633 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -204,44 +204,45 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock,
* Mutex spinning code migrated from kernel/sched/core.c
*/
-static inline bool owner_running(struct mutex *lock, struct task_struct *owner)
-{
- if (lock->owner != owner)
- return false;
-
- /*
- * Ensure we emit the owner->on_cpu, dereference _after_ checking
- * lock->owner still matches owner, if that fails, owner might
- * point to free()d memory, if it still matches, the rcu_read_lock()
- * ensures the memory stays valid.
- */
- barrier();
-
- return owner->on_cpu;
-}
-
/*
* Look out! "owner" is an entirely speculative pointer
* access and not reliable.
*/
static noinline
-int mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
+bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
{
+ bool ret;
+
rcu_read_lock();
- while (owner_running(lock, owner)) {
- if (need_resched())
+ while (true) {
+ /* Return success when the lock owner changed */
+ if (lock->owner != owner) {
+ ret = true;
break;
+ }
+
+		/*
+		 * Ensure we emit the owner->on_cpu dereference _after_
+		 * checking that lock->owner still matches owner. If that
+		 * fails, owner might point to free()d memory; if it still
+		 * matches, the rcu_read_lock() ensures the memory stays
+		 * valid.
+		 */
+ barrier();
+
+		/*
+		 * Stop spinning if we need to reschedule or if the
+		 * owner is not running.
+		 */
+ if (!owner->on_cpu || need_resched()) {
+ ret = false;
+ break;
+ }
cpu_relax_lowlatency();
}
rcu_read_unlock();
- /*
- * We break out the loop above on either need_resched(), when
- * the owner is not running, or when the lock owner changed.
- * Return success only when the lock owner changed.
- */
- return lock->owner != owner;
+ return ret;
}
/*
--