Date:	Sun, 2 Feb 2014 13:58:17 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Jason Low <jason.low2@...com>
Cc:	mingo@...hat.com, peterz@...radead.org, Waiman.Long@...com,
	torvalds@...ux-foundation.org, tglx@...utronix.de,
	linux-kernel@...r.kernel.org, riel@...hat.com,
	akpm@...ux-foundation.org, davidlohr@...com, hpa@...or.com,
	andi@...stfloor.org, aswin@...com, scott.norton@...com,
	chegu_vinod@...com
Subject: Re: [PATCH v2 2/5] mutex: Modify the way optimistic spinners are
 queued

On Tue, Jan 28, 2014 at 02:10:41PM -0800, Jason Low wrote:
> On Tue, 2014-01-28 at 12:23 -0800, Paul E. McKenney wrote:
> > On Tue, Jan 28, 2014 at 11:13:13AM -0800, Jason Low wrote:
> > >  		/*
> > >  		 * The cpu_relax() call is a compiler barrier which forces
> > > @@ -514,6 +511,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> > >  		 */
> > >  		arch_mutex_cpu_relax();
> > >  	}
> > > +	mspin_unlock(MLOCK(lock), &node);
> > >  slowpath:
> > 
> > Are there any remaining goto statements to slowpath?  If so, they need
> > to release the lock.  If not, this label should be removed.
> 
> Yes, if mutex_can_spin_on_owner() returns false, then the thread
> goes directly to the slowpath, bypassing the optimistic spinning
> loop. In that case, the thread never acquires the MCS lock, and so
> has nothing to unlock.
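
[The control flow being discussed can be modeled in a minimal userspace
sketch. This is not the kernel code: mspin_lock()/mspin_unlock(),
mutex_can_spin_on_owner(), and lock_common() below are hypothetical
stand-ins that only track whether the MCS lock is balanced on each path.]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the paths through __mutex_lock_common() after the
 * patch.  mcs_held stands in for holding the per-mutex MCS lock. */
static bool mcs_held;

static void mspin_lock(void)   { mcs_held = true;  }
static void mspin_unlock(void) { mcs_held = false; }

/* Returns true iff the MCS lock is released by the time we reach
 * (or skip past) the slowpath. */
static bool lock_common(bool can_spin, bool acquired_in_loop)
{
	mcs_held = false;

	if (!can_spin)
		goto slowpath;	/* MCS lock never taken: nothing to release */

	mspin_lock();
	if (acquired_in_loop) {
		mspin_unlock();	/* success inside the spin loop */
		return !mcs_held;
	}
	mspin_unlock();		/* the hunk above: release before falling
				 * through to the slowpath label */
slowpath:
	/* sleeping path: must not hold the MCS lock here */
	return !mcs_held;
}
```

[All three paths (bail out before spinning, acquire while spinning,
fall through to the slowpath) leave the model lock balanced, which is
the invariant the question about remaining goto statements is probing.]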

Got it, apologies for my confusion!

							Thanx, Paul

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/