Date:	Thu, 03 Jul 2014 13:08:42 -0700
From:	Davidlohr Bueso <davidlohr@...com>
To:	Jason Low <jason.low2@...com>
Cc:	Dave Chinner <david@...morbit.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org
Subject: Re: [regression, 3.16-rc] rwsem: optimistic spinning causing
 performance degradation

Adding lkml.

On Thu, 2014-07-03 at 12:37 -0700, Davidlohr Bueso wrote:
> On Thu, 2014-07-03 at 11:50 -0700, Jason Low wrote:
> > On Wed, Jul 2, 2014 at 7:32 PM, Dave Chinner <david@...morbit.com> wrote:
> > > This is what the kernel profile looks like on the strided run:
> > >
> > > -  83.06%  [kernel]  [k] osq_lock
> > >    - osq_lock
> > >       - 100.00% rwsem_down_write_failed
> > >          - call_rwsem_down_write_failed
> > >             - 99.55% sys_mprotect
> > >                  tracesys
> > >                  __GI___mprotect
> > > -  12.02%  [kernel]  [k] rwsem_down_write_failed
> > 
> > Hi Dave,
> > 
> > So with no sign of rwsem_spin_on_owner(), yet such heavy contention in
> > osq_lock, I wonder whether it's spending most of its time spinning on
> > !owner while a reader holds the lock. (We don't set sem->owner for readers.)
> 
> That would explain the long hold times, given the memory allocation
> patterns Dave described that alternate between read and write locking.
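> 
> For reference, the write-side spin loop looks roughly like this (a
> sketch from memory, not the exact 3.16-rc code): since readers never
> set sem->owner, a spinning writer sees owner == NULL, can't tell a
> reader-held lock from a released one, and keeps looping while queued
> on the OSQ:
> 
> 	while (true) {
> 		owner = ACCESS_ONCE(sem->owner);
> 		if (owner && !rwsem_spin_on_owner(sem, owner))
> 			break;		/* write owner stopped running */
> 
> 		if (rwsem_try_write_lock_unqueued(sem))
> 			return true;	/* got the lock */
> 
> 		if (!owner && (need_resched() || rt_task(current)))
> 			break;
> 
> 		/* owner == NULL: possibly reader-held, spin anyway */
> 		cpu_relax();
> 	}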
> 
> > If that's the issue, maybe the below is worth a test: we simply avoid
> > spinning if rwsem_can_spin_on_owner() finds that there is no owner. If we
> > had to enter the slowpath yet there is no owner, we're conservative and
> > assume readers hold the lock.
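> > 
> > Something along these lines (a hand-written sketch of the idea, not a
> > tested diff; names as in the current slowpath):
> > 
> > 	static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
> > 	{
> > 		struct task_struct *owner;
> > 		bool on_cpu = false;	/* conservative: assume readers */
> > 
> > 		if (need_resched())
> > 			return false;
> > 
> > 		rcu_read_lock();
> > 		owner = ACCESS_ONCE(sem->owner);
> > 		if (owner)
> > 			on_cpu = owner->on_cpu;
> > 		rcu_read_unlock();
> > 
> > 		/*
> > 		 * If sem->owner is not set, yet we had to enter the
> > 		 * slowpath, reader(s) may hold the lock: don't spin.
> > 		 */
> > 		return on_cpu;
> > 	}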
> 
> I do worry a bit about the effects when this is not an issue: workloads
> with shorter hold times could very well take a performance hit by
> blocking right away instead of spending a few extra cycles spinning.
> 
> > (David, you've tested something like this in the original patch with AIM7
> > and still got the big performance boosts, right?)
> 
> I have not, but will. I wouldn't mind sacrificing a bit of the great
> performance numbers we're getting on workloads that mostly take the lock
> for writing, if it means not being so devastating when readers are in
> the picture. This is a major difference from mutexes wrt optimistic
> spinning.
> 
> Thanks,
> Davidlohr


