Message-ID: <1422562731.2418.16.camel@j-VirtualBox>
Date:	Thu, 29 Jan 2015 12:18:51 -0800
From:	Jason Low <jason.low2@...com>
To:	Davidlohr Bueso <dave@...olabs.net>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Michel Lespinasse <walken@...gle.com>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	linux-kernel@...r.kernel.org, jason.low2@...com
Subject: Re: [PATCH 4/6] locking/rwsem: Avoid deceiving lock spinners

On Thu, 2015-01-29 at 12:13 -0800, Jason Low wrote:
> On Wed, 2015-01-28 at 17:10 -0800, Davidlohr Bueso wrote:
> 
> > 	if (READ_ONCE(sem->owner))
> > 		return true; /* new owner, continue spinning */
> 
> In terms of the sem->owner check, I agree. This also reminds me of a
> similar patch I was going to send out, but for mutex. The idea is that
> before we added all the MCS logic, we wanted to return false if owner
> isn't NULL to prevent too many threads from spinning on the owner at
> the same time.
> 
> With the new MCS queuing, we don't have to worry about that issue, so it
> makes sense to continue spinning as long as owner is running and the
> spinner doesn't need to reschedule.
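
For anyone not familiar with the MCS scheme being referenced, the queuing
works roughly like the following userspace sketch (C11 atomics). This
illustrates the general MCS idea only, not the kernel's osq_lock /
optimistic_spin_queue code: each waiter spins on a flag in its own node,
so at most one task at a time ever spins on the lock owner itself.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_bool locked;		/* true while this waiter must keep spinning */
};

struct mcs_lock {
	_Atomic(struct mcs_node *) tail;	/* NULL when the queue is empty */
};

static void mcs_acquire(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	atomic_store(&node->next, NULL);
	atomic_store(&node->locked, true);

	/* Atomically append ourselves to the tail of the queue. */
	prev = atomic_exchange(&lock->tail, node);
	if (prev) {
		atomic_store(&prev->next, node);
		/* Spin only on our own node, not on a shared cache line. */
		while (atomic_load(&node->locked))
			;
	}
}

static void mcs_release(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *next = atomic_load(&node->next);

	if (!next) {
		struct mcs_node *expected = node;

		/* No successor visible: try to empty the queue. */
		if (atomic_compare_exchange_strong(&lock->tail, &expected, NULL))
			return;

		/* A successor is in the middle of linking in; wait for it. */
		while (!(next = atomic_load(&node->next)))
			;
	}
	/* Hand off: the next waiter stops spinning and becomes the head. */
	atomic_store(&next->locked, false);
}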

By the way, here was the patch that I was referring to:

-------------------------------------------------------

In mutex_spin_on_owner(), we return true only if lock->owner == NULL.
This was beneficial in situations where there were multiple threads
simultaneously spinning for the mutex. If another thread got the lock
while other spinner(s) were also doing mutex_spin_on_owner(), then the
other spinners would stop spinning. This workaround reduced the chance
of many spinners simultaneously spinning for the mutex, which can
improve performance in highly contended cases.

However, recent changes were made to the optimistic spinning code such
that instead of having all spinners simultaneously spin for the mutex,
we queue the spinners with an MCS lock such that only one thread spins
for the mutex at a time.

Now we don't have to worry about multiple threads spinning on the owner
at the same time, and if lock->owner is not NULL at this point, it likely
means another thread happened to obtain the lock in the fastpath.
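
Roughly speaking, the fastpath looks like the sketch below. This is a
simplified, self-contained illustration rather than the kernel's
arch-specific fastpath, and the struct and helper names are made up for
the example; the point is that the owner field is published right after
a successful fast acquire, so a spinner that reads a non-NULL owner
knows another task already completed this sequence.

#include <stdatomic.h>
#include <stdbool.h>

struct task;				/* stand-in for struct task_struct */

struct simple_mutex {
	atomic_int count;			/* 1 == unlocked, 0 == locked */
	_Atomic(struct task *) owner;		/* published after a successful acquire */
};

/* Simplified fastpath: grab the count with a single cmpxchg, then record
 * the owner.  A spinner that later sees owner != NULL knows some task
 * already won this race, typically without entering the slowpath at all. */
static bool fastpath_trylock(struct simple_mutex *lock, struct task *curr)
{
	int expected = 1;

	if (atomic_compare_exchange_strong(&lock->count, &expected, 0)) {
		atomic_store(&lock->owner, curr);
		return true;
	}
	return false;
}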

This patch changes mutex_spin_on_owner() to return true when the lock
owner changes, which means a thread will only stop spinning if it either
needs to reschedule or if the lock owner is not running.

We saw up to a 5% performance improvement in the fserver workload with
this patch.

Signed-off-by: Jason Low <jason.low2@...com>
---
 kernel/locking/mutex.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 8d1e2c1..e94e6b8 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -163,11 +163,11 @@ int mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
 	rcu_read_unlock();
 
 	/*
-	 * We break out the loop above on need_resched() and when the
-	 * owner changed, which is a sign for heavy contention. Return
-	 * success only when lock->owner is NULL.
+	 * We break out the loop above on either need_resched(), when
+	 * the owner is not running, or when the lock owner changed.
+	 * Return success only when the lock owner changed.
 	 */
-	return lock->owner == NULL;
+	return lock->owner != owner;
 }
 
 /*
-- 
1.7.1
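
For completeness, here is a condensed sketch of the caller side in
mutex_optimistic_spin(), showing how the changed return value is
consumed. This is paraphrased from memory of that era's mutex.c with
details omitted, so treat the helper names as approximate rather than
verbatim: a false return now only happens when the spinner must
reschedule or the owner blocked, while a true return (owner changed)
lets it fall through and retry the trylock.

	while (true) {
		struct task_struct *owner = READ_ONCE(lock->owner);

		/*
		 * With this patch, a false return only means we should stop
		 * spinning (need_resched() or the owner went to sleep); an
		 * owner change returns true and we retry the trylock below.
		 */
		if (owner && !mutex_spin_on_owner(lock, owner))
			break;			/* give up, fall back to the slowpath */

		if (mutex_try_to_acquire(lock)) {	/* cmpxchg on lock->count */
			mutex_set_owner(lock);
			osq_unlock(&lock->osq);
			return true;		/* acquired the mutex while spinning */
		}

		if (!owner && need_resched())
			break;

		cpu_relax_lowlatency();
	}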


