Message-ID: <1453458019.9727.8.camel@j-VirtualBox>
Date:	Fri, 22 Jan 2016 02:20:19 -0800
From:	Jason Low <jason.low2@...com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Waiman Long <waiman.long@....com>,
	Ding Tianhong <dingtianhong@...wei.com>,
	Ingo Molnar <mingo@...hat.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Davidlohr Bueso <dave@...olabs.net>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	"Paul E. McKenney" <paulmck@...ibm.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Will Deacon <Will.Deacon@....com>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Waiman Long <Waiman.Long@...com>, jason.low2@...com
Subject: Re: [PATCH RFC] locking/mutexes: don't spin on owner when wait list
 is not NULL.

On Fri, 2016-01-22 at 09:54 +0100, Peter Zijlstra wrote:
> On Thu, Jan 21, 2016 at 06:02:34PM -0500, Waiman Long wrote:
> > This patch attempts to fix this live-lock condition by enabling a
> > woken task in the wait list to enter the optimistic spinning loop
> > itself, with precedence over the ones in the OSQ. This should
> > prevent the live-lock condition from happening.
> 
> 
> So I think having the top waiter going back in to contend on the OSQ is
> an excellent idea, but I'm not sure the wlh_spinning thing is important.
> 
> The OSQ itself is FIFO fair, and the waiters retain their wait_list
> position. So having the top wait_list entry contend on the OSQ
> ensures we cannot starve (I think).

Right, and we can also avoid adding that extra field to the mutex
structure. Before attempting the optimistic spin, though, we do want to
check whether the lock is still held, so that a woken waiter doesn't pay
the OSQ overhead when the lock is already available.
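
As an aside, the FIFO fairness of the OSQ comes from its MCS-style
queueing: each spinner atomically swaps itself into the queue tail and
then spins on its own node until its predecessor hands the lock over.
A minimal userspace sketch of the idea (illustration only, using the
GCC __atomic builtins, not the kernel's actual osq_lock()/osq_unlock()):

	struct mcs_node {
		struct mcs_node *next;
		int locked;
	};

	static void mcs_lock(struct mcs_node **tail, struct mcs_node *node)
	{
		struct mcs_node *prev;

		node->next = NULL;
		node->locked = 0;

		/* The exchange on the tail serializes arrivals: FIFO. */
		prev = __atomic_exchange_n(tail, node, __ATOMIC_ACQ_REL);
		if (!prev)
			return;		/* queue was empty, lock acquired */

		__atomic_store_n(&prev->next, node, __ATOMIC_RELEASE);

		/* Spin locally until the predecessor passes the lock on. */
		while (!__atomic_load_n(&node->locked, __ATOMIC_ACQUIRE))
			;		/* spin */
	}

	static void mcs_unlock(struct mcs_node **tail, struct mcs_node *node)
	{
		struct mcs_node *next =
			__atomic_load_n(&node->next, __ATOMIC_ACQUIRE);

		if (!next) {
			struct mcs_node *old = node;

			/* No successor visible: try to empty the queue. */
			if (__atomic_compare_exchange_n(tail, &old, NULL, 0,
							__ATOMIC_ACQ_REL,
							__ATOMIC_RELAXED))
				return;

			/* A successor is mid-enqueue; wait for its link. */
			while (!(next = __atomic_load_n(&node->next,
							__ATOMIC_ACQUIRE)))
				;	/* spin */
		}
		__atomic_store_n(&next->locked, 1, __ATOMIC_RELEASE);
	}

Since arrivals are ordered by that one atomic exchange on the tail, a
spinner can only be passed by tasks that queued before it, which is what
makes the woken waiter's re-entry into the OSQ starvation-free.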

So maybe the following would be sufficient:

---
 kernel/locking/mutex.c |   12 ++++++++++++
 1 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 0551c21..ead0bd1 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -543,6 +543,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	lock_contended(&lock->dep_map, ip);
 
 	for (;;) {
+		bool acquired = false;
+
 		/*
 		 * Lets try to take the lock again - this is needed even if
 		 * we get here for the first time (shortly after failing to
@@ -577,7 +579,17 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		/* didn't get the lock, go to sleep: */
 		spin_unlock_mutex(&lock->wait_lock, flags);
 		schedule_preempt_disabled();
+
+		/*
+		 * After waking, spin on the owner only if the mutex is
+		 * still held; if it is already free, the trylock at the
+		 * top of the loop will take it without the OSQ overhead.
+		 */
+		if (mutex_is_locked(lock))
+			acquired = mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx);
 		spin_lock_mutex(&lock->wait_lock, flags);
+		if (acquired)
+			break;
 	}
 	__set_task_state(task, TASK_RUNNING);
 
-- 
1.7.2.5
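
For context, with the change applied the wait loop would read roughly as
follows (simplified from my reading of the current __mutex_lock_common();
signal, ww-mutex, and lockdep handling omitted, so treat it as a sketch
rather than the exact kernel code):

	for (;;) {
		bool acquired = false;

		/*
		 * Try to take the lock again; we xchg the count to -1 so
		 * that a release wakes us up if we end up sleeping.
		 */
		if (atomic_read(&lock->count) >= 0 &&
		    (atomic_xchg_acquire(&lock->count, -1) == 1))
			break;

		__set_task_state(task, state);

		/* didn't get the lock, go to sleep: */
		spin_unlock_mutex(&lock->wait_lock, flags);
		schedule_preempt_disabled();

		/*
		 * New in this patch: a woken waiter spins on the owner
		 * before blocking again, but only while the lock is still
		 * held; otherwise the trylock above will pick it up.
		 */
		if (mutex_is_locked(lock))
			acquired = mutex_optimistic_spin(lock, ww_ctx,
							 use_ww_ctx);

		spin_lock_mutex(&lock->wait_lock, flags);
		if (acquired)
			break;
	}
	__set_task_state(task, TASK_RUNNING);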