Message-ID: <1391708691.3971.73.camel@j-VirtualBox>
Date:	Thu, 06 Feb 2014 09:44:51 -0800
From:	Jason Low <jason.low2@...com>
To:	Waiman Long <waiman.long@...com>
Cc:	Peter Zijlstra <peterz@...radead.org>, mingo@...hat.com,
	paulmck@...ux.vnet.ibm.com, torvalds@...ux-foundation.org,
	tglx@...utronix.de, linux-kernel@...r.kernel.org, riel@...hat.com,
	akpm@...ux-foundation.org, davidlohr@...com, hpa@...or.com,
	andi@...stfloor.org, aswin@...com, scott.norton@...com,
	chegu_vinod@...com
Subject: Re: [RFC][PATCH v2 5/5] mutex: Give spinners a chance to
 spin_on_owner if need_resched() triggered while queued

On Wed, 2014-02-05 at 16:44 -0500, Waiman Long wrote:
> On 01/29/2014 06:51 AM, Peter Zijlstra wrote:
> > On Tue, Jan 28, 2014 at 02:51:35PM -0800, Jason Low wrote:
> >>> But urgh, nasty problem. Lemme ponder this a bit.
> > OK, please have a very careful look at the below. It survived a boot
> > with udev -- which usually stresses mutex contention enough to explode
> > (in fact it did a few times when I got the contention/cancel path wrong),
> > however I have not run anything else on it.
> >
> > The below is an MCS variant that allows relatively cheap unqueueing. But
> > it's somewhat tricky and I might have gotten a case wrong, esp. the
> > double concurrent cancel case got my head hurting (I didn't attempt a
> > triple unqueue).
> >
> > Applies to tip/master but does generate a few (harmless) compile
> > warnings because I didn't fully clean up the mcs_spinlock vs m_spinlock
> > thing.
> >
> > Also, there's a comment in the slowpath that bears consideration.
> >
> >
> 
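(For anyone reading along without the patch in front of them: Peter's
actual patch is in the earlier mail and is not quoted here. Below is a
minimal userspace sketch, using C11 atomics instead of the kernel's
primitives, of the plain MCS queue lock that both approaches in this
thread start from. It is only the unmodified baseline, not Peter's
cancellable variant; the difficulty he describes is in adding an unqueue
path to this structure.)

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
        _Atomic(struct mcs_node *) next;
        atomic_bool locked;                     /* set when the lock is handed to us */
};

struct mcs_lock {
        _Atomic(struct mcs_node *) tail;        /* last waiter, NULL when uncontended */
};

static void mcs_acquire(struct mcs_lock *lock, struct mcs_node *node)
{
        atomic_store(&node->next, NULL);
        atomic_store(&node->locked, false);

        /* Join the tail of the queue. */
        struct mcs_node *prev = atomic_exchange(&lock->tail, node);
        if (!prev)
                return;                         /* queue was empty: lock acquired */

        /* Publish ourselves to the predecessor, then spin on our own flag. */
        atomic_store(&prev->next, node);
        while (!atomic_load(&node->locked))
                ;                               /* cpu_relax() in the kernel */
}

static void mcs_release(struct mcs_lock *lock, struct mcs_node *node)
{
        struct mcs_node *next = atomic_load(&node->next);

        if (!next) {
                /* No visible successor: try to mark the lock free. */
                struct mcs_node *expected = node;
                if (atomic_compare_exchange_strong(&lock->tail, &expected, NULL))
                        return;
                /* A new waiter took the tail but has not linked in yet. */
                while (!(next = atomic_load(&node->next)))
                        ;
        }
        atomic_store(&next->locked, true);      /* hand the lock to the successor */
}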
> I have an alternative way of breaking out of the MCS lock waiting queue
> when need_resched() is set. I overload the locked flag, using a negative
> value to indicate a skipped node. I ran the patch through the AIM7
> high-systime workload on a 4-socket server and it seemed to run fine.
> 
> Please check the following POC patch to see if you have any comments.
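(The POC patch itself is not included in this reply. Purely to illustrate
the idea described above, a negative value in the locked field marking a
node that the releaser should skip, here is a userspace sketch with C11
atomics. It is not Waiman's patch; the state encoding, in particular the
extra "2" value that tells the node's owner a releaser has passed the node
and it may be reused, is an assumption made for the sketch.)

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_skip_node {
        _Atomic(struct mcs_skip_node *) next;
        atomic_int state;       /* 0 waiting, 1 lock granted, -1 abandoned, 2 passed over */
};

struct mcs_skip_lock {
        _Atomic(struct mcs_skip_node *) tail;
};

/*
 * Returns true if the lock was taken.  Returns false if *bail (standing in
 * for need_resched()) became true first; in that case the node stays linked
 * in the queue and must not be reused until its state reaches 2.
 */
static bool mcs_lock_or_bail(struct mcs_skip_lock *lock,
                             struct mcs_skip_node *node, atomic_bool *bail)
{
        atomic_store(&node->next, NULL);
        atomic_store(&node->state, 0);

        struct mcs_skip_node *prev = atomic_exchange(&lock->tail, node);
        if (!prev)
                return true;                    /* empty queue: lock acquired */
        atomic_store(&prev->next, node);

        while (atomic_load(&node->state) == 0) {
                if (atomic_load(bail)) {
                        int waiting = 0;
                        /* Mark the node skipped so the releaser passes over it. */
                        if (atomic_compare_exchange_strong(&node->state, &waiting, -1))
                                return false;
                        break;                  /* lost the race: the lock was granted */
                }
        }
        return true;
}

static void mcs_unlock_skipping(struct mcs_skip_lock *lock, struct mcs_skip_node *node)
{
        struct mcs_skip_node *cur = node;

        for (;;) {
                struct mcs_skip_node *next = atomic_load(&cur->next);

                if (!next) {
                        struct mcs_skip_node *expected = cur;
                        if (atomic_compare_exchange_strong(&lock->tail, &expected, NULL)) {
                                if (cur != node)
                                        atomic_store(&cur->state, 2);   /* free the last skipped node */
                                return;
                        }
                        /* A new waiter took the tail; wait for it to link in. */
                        while (!(next = atomic_load(&cur->next)))
                                ;
                }
                if (cur != node)
                        atomic_store(&cur->state, 2);   /* done reading this skipped node */

                int waiting = 0;
                if (atomic_compare_exchange_strong(&next->state, &waiting, 1))
                        return;                 /* handed the lock to a live waiter */
                cur = next;                     /* that waiter bailed out too: skip it */
        }
}

(The property that matters for the discussion below is that a waiter which
bails out returns with its node still linked in the queue; the node only
becomes reusable once some releaser has walked past it.)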

So one of the concerns I had with the approach of skipping nodes was
that, under heavy contention, we could potentially leave optimistic
spinning disabled on some CPUs for a while, since a skipped node can't be
reused until it has been released. One advantage of the unqueueing
method is that nodes become usable again as soon as the spinners exit
the MCS queue and go to sleep.
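
(To make that concrete with the hypothetical state encoding from the
sketch above: a CPU whose node was abandoned in some queue has to refuse
further optimistic spins until a releaser walks past that node, whereas
with unqueueing the node is unlinked at cancel time and the check below
would always succeed.)

#include <stdatomic.h>
#include <stdbool.h>

struct mcs_skip_node {
        _Atomic(struct mcs_skip_node *) next;
        atomic_int state;       /* same hypothetical encoding as the sketch above */
};

/*
 * A CPU whose per-CPU node was abandoned in a queue (state == -1) cannot
 * start another optimistic spin until a releaser has walked past the node
 * and set state to 2.
 */
static bool can_start_optimistic_spin(struct mcs_skip_node *percpu_node)
{
        return atomic_load(&percpu_node->state) != -1;
}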

Jason

