Message-ID: <5759B21E.2030003@intel.com>
Date:	Thu, 9 Jun 2016 11:14:54 -0700
From:	Dave Hansen <dave.hansen@...el.com>
To:	Ingo Molnar <mingo@...nel.org>, Waiman Long <waiman.long@....com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	"Chen, Tim C" <tim.c.chen@...el.com>,
	Ingo Molnar <mingo@...hat.com>,
	Davidlohr Bueso <dbueso@...e.de>,
	"Peter Zijlstra (Intel)" <peterz@...radead.org>,
	Jason Low <jason.low2@...com>,
	Michel Lespinasse <walken@...gle.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Waiman Long <waiman.long@...com>,
	Al Viro <viro@...iv.linux.org.uk>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: performance delta after VFS i_mutex=>i_rwsem conversion

On 06/09/2016 03:25 AM, Ingo Molnar wrote:
>>> That should eliminate the performance gap between mutex and rwsem wrt
>>> spinning when only writers are present. I am hoping that that patchset can
>>> be queued for 4.8.
>>
>> Yeah, so I actually had this series merged for testing last week, but a 
>> complication with a prereq patch made me unmerge it. But I have no fundamental 
>> objections, at all.
...
> Ok, these enhancements are now in the locking tree and are queued up for v4.8:
> 
>    git pull git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking/core
> 
> Dave, you might want to check your numbers with these changes: is rwsem 
> performance still significantly worse than mutex performance?

It's substantially closer than it was, but there's probably a little
work still to do.  The rwsem still looks to be sleeping a lot more than
the mutex.  Here's where we started:

	https://www.sr71.net/~dave/intel/rwsem-vs-mutex.png

The rwsem peaked lower and earlier than the mutex code.  Now, if we
compare the old (4.7-rc1) rwsem code to the newly-patched rwsem code
(from tip/locking):

> https://www.sr71.net/~dave/intel/bb.html?1=4.7.0-rc1&2=4.7.0-rc1-00127-gd4c3be7

We can see that the peak is a bit higher and, more importantly, that it's
more of a plateau than a sharp peak.  We can also compare the new rwsem
code to the 4.5 code that had the mutex in place:

> https://www.sr71.net/~dave/intel/bb.html?1=4.5.0-rc6&2=4.7.0-rc1-00127-gd4c3be7

rwsems are still a _bit_ below the mutex code at the peak, and they also
seem to be substantially lower during the tail from 20 CPUs on up.  The
rwsems are sleeping less than they were before the tip/locking updates,
but they still leave the CPUs idle ~90% of the time, while with the
mutex code the CPUs are idle only 15-20% of the time when all of them
are contending on the lock.
