Date:	Thu, 16 Jan 2014 13:05:59 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Jason Low <jason.low2@...com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ingo Molnar <mingo@...hat.com>,
	Paul McKenney <paulmck@...ux.vnet.ibm.com>,
	Waiman Long <Waiman.Long@...com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Rik van Riel <riel@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Davidlohr Bueso <davidlohr@...com>,
	Peter Anvin <hpa@...or.com>,
	"Chandramouleeswaran, Aswin" <aswin@...com>,
	"Norton, Scott J" <scott.norton@...com>
Subject: Re: [RFC 3/3] mutex: When there is no owner, stop spinning after too
 many tries

On Wed, Jan 15, 2014 at 10:46:17PM -0800, Jason Low wrote:
> On Thu, 2014-01-16 at 10:14 +0700, Linus Torvalds wrote:
> > On Thu, Jan 16, 2014 at 9:45 AM, Jason Low <jason.low2@...com> wrote:
> > >
> > > Any comments on the below change which unlocks the mutex before taking
> > > the lock->wait_lock to wake up a waiter? Thanks.
> > 
> > Hmm. Doesn't that mean that a new lock owner can come in *before*
> > you've called debug_mutex_unlock and the lockdep stuff, and get the
> > lock? And then debug_mutex_lock() will be called *before* the unlocker
> > called debug_mutex_unlock(), which I'm sure confuses things.
> 
> If obtaining the wait_lock for debug_mutex_unlock is the issue, then
> perhaps we can address that by taking care of
> #ifdef CONFIG_DEBUG_MUTEXES. In the CONFIG_DEBUG_MUTEXES case, we can
> take the wait_lock first, and in the regular case, take the wait_lock
> after releasing the mutex.

I think we're already good for DEBUG_MUTEXES: it has to work on archs
where !__mutex_slowpath_needs_to_unlock(), and the DEBUG_MUTEXES code is
entirely serialized on ->wait_lock anyway.

Note that this is exactly why we cannot do optimistic spinning for
DEBUG_MUTEXES.

