Date:	Thu, 07 Aug 2014 17:45:26 -0700
From:	Davidlohr Bueso <davidlohr@...com>
To:	Waiman Long <Waiman.Long@...com>
Cc:	Ingo Molnar <mingo@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, Jason Low <jason.low2@...com>,
	Scott J Norton <scott.norton@...com>, aswin@...com
Subject: Re: [PATCH v2 1/7] locking/rwsem: check for active writer/spinner
 before wakeup

On Thu, 2014-08-07 at 18:26 -0400, Waiman Long wrote:
> On a highly contended rwsem, spinlock contention due to the slow
> rwsem_wake() call can be a significant portion of the total CPU cycles
> used. With writer lock stealing and writer optimistic spinning, there
> is also a pretty good chance that the lock may have been stolen
> before the waker wakes up the waiters. The woken tasks, if any,
> will have to go back to sleep again.

Good catch! And this applies to mutexes as well. How about something
like this:

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index dadbf88..e037588 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -707,6 +707,20 @@ EXPORT_SYMBOL_GPL(__ww_mutex_lock_interruptible);
 
 #endif
 
+#if defined(CONFIG_DEBUG_MUTEXES) || defined(CONFIG_MUTEX_SPIN_ON_OWNER)
+static inline bool mutex_has_owner(struct mutex *lock)
+{
+	struct task_struct *owner = ACCESS_ONCE(lock->owner);
+
+	return owner != NULL;
+}
+#else
+static inline bool mutex_has_owner(struct mutex *lock)
+{
+	return false;
+}
+#endif
+
 /*
  * Release the lock, slowpath:
  */
@@ -734,6 +748,15 @@ __mutex_unlock_common_slowpath(struct mutex *lock, int nested)
 	mutex_release(&lock->dep_map, nested, _RET_IP_);
 	debug_mutex_unlock(lock);
 
+	/*
+	 * Abort the wakeup if the lock has been stolen. mutex_unlock()
+	 * cleared the owner field before calling this function; if the
+	 * field is set again, another task now owns the lock and will
+	 * handle the next wakeup itself.
+	 */
+	if (mutex_has_owner(lock))
+		goto done;
+
 	if (!list_empty(&lock->wait_list)) {
 		/* get the first entry from the wait-list: */
 		struct mutex_waiter *waiter =
@@ -744,7 +767,7 @@ __mutex_unlock_common_slowpath(struct mutex *lock, int nested)
 
 		wake_up_process(waiter->task);
 	}
-
+done:
 	spin_unlock_mutex(&lock->wait_lock, flags);
 }
 
