Date:	Thu, 06 Feb 2014 13:37:17 -0500
From:	Waiman Long <>
To:	Jason Low <>
CC:	Peter Zijlstra <>
Subject: Re: [RFC][PATCH v2 5/5] mutex: Give spinners a chance to spin_on_owner
 if need_resched() triggered while queued

On 02/06/2014 12:44 PM, Jason Low wrote:
> On Wed, 2014-02-05 at 16:44 -0500, Waiman Long wrote:
>> On 01/29/2014 06:51 AM, Peter Zijlstra wrote:
>>> On Tue, Jan 28, 2014 at 02:51:35PM -0800, Jason Low wrote:
>>>>> But urgh, nasty problem. Lemme ponder this a bit.
>>> OK, please have a very careful look at the below. It survived a boot
>>> with udev -- which usually stresses mutex contention enough to explode
>>> (in fact it did a few times when I got the contention/cancel path wrong),
>>> however I have not run anything else on it.
>>> The below is an MCS variant that allows relatively cheap unqueueing. But
>>> it's somewhat tricky and I might have gotten a case wrong, esp. the
>>> double concurrent cancel case got my head hurting (I didn't attempt a
>>> triple unqueue).
>>> Applies to tip/master but does generate a few (harmless) compile
>>> warnings because I didn't fully clean up the mcs_spinlock vs m_spinlock
>>> thing.
>>> Also, there's a comment in the slowpath that bears consideration.
>> I have an alternative way of breaking out of the MCS lock waiting queue
>> when need_resched() is set. I overload the locked flag to indicate a
>> skipped node if negative. I run the patch through the AIM7 high-systime
>> workload on a 4-socket server and it seemed to run fine.
>> Please check the following POC patch to see if you have any comment.
> So one of the concerns I had with the approach of skipping nodes was
> that, under heavy contention, we potentially could cause optimistic
> spinning to be disabled on CPUs for a while since the nodes can't be
> used until they have been released. One advantage of the unqueuing
> method would be that nodes are usable after the spinners exit the MCS
> queue and go to sleep.
> Jason

Under heavy contention, many threads are trying to access the mutexes 
using optimistic spinning. This patch can actually reduce the number of 
CPU cycles wasted in the MCS spin loop and let the CPUs do other useful 
work instead, so I don't see that as a negative. I think this kind of 
self-tuning is actually good for the overall throughput of the system.
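For illustration, here is a minimal user-space sketch of the "skipped node" idea described above: the MCS node's locked field is overloaded so that a negative value means the queued spinner gave up (need_resched() fired) and the releaser should pass over it. The names (m_node, m_node_abort, m_pass_lock) are hypothetical and not taken from the actual POC patch; this only sketches the hand-off logic, not the full lock.

```c
#include <stdatomic.h>
#include <stddef.h>

struct m_node {
	_Atomic(struct m_node *) next;
	atomic_int locked;	/* 0 = spinning, 1 = lock granted, -1 = skipped */
};

/* Queued spinner side: need_resched() fired, so mark our node skipped
 * instead of continuing to spin. The node stays in the queue until the
 * releaser walks past it, which is the reuse delay Jason mentions. */
static void m_node_abort(struct m_node *node)
{
	atomic_store(&node->locked, -1);
}

/* Releaser side: hand the lock to the first successor that has not
 * marked itself skipped; return that node (or NULL if none remain). */
static struct m_node *m_pass_lock(struct m_node *node)
{
	struct m_node *next = atomic_load(&node->next);

	while (next && atomic_load(&next->locked) < 0)
		next = atomic_load(&next->next);

	if (next)
		atomic_store(&next->locked, 1);
	return next;
}
```

Note this sketch sidesteps the unqueueing races Peter's variant has to handle: a skipped node is never removed from the queue, only stepped over, which is why it remains unusable until the releaser reaches it.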
