Message-ID: <20080229214800.GD8091@v2.random>
Date:	Fri, 29 Feb 2008 22:48:00 +0100
From:	Andrea Arcangeli <andrea@...ranet.com>
To:	Christoph Lameter <clameter@....com>
Cc:	Nick Piggin <nickpiggin@...oo.com.au>, akpm@...ux-foundation.org,
	Robin Holt <holt@....com>, Avi Kivity <avi@...ranet.com>,
	Izik Eidus <izike@...ranet.com>,
	kvm-devel@...ts.sourceforge.net,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	general@...ts.openfabrics.org,
	Steve Wise <swise@...ngridcomputing.com>,
	Roland Dreier <rdreier@...co.com>,
	Kanoj Sarcar <kanojsarcar@...oo.com>, steiner@....com,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	daniel.blueman@...drics.com
Subject: Re: [patch 2/6] mmu_notifier: Callbacks to invalidate address
	ranges

On Fri, Feb 29, 2008 at 01:34:34PM -0800, Christoph Lameter wrote:
> On Fri, 29 Feb 2008, Andrea Arcangeli wrote:
> 
> > On Fri, Feb 29, 2008 at 01:03:16PM -0800, Christoph Lameter wrote:
> > > That means we need both the anon_vma locks and the i_mmap_lock to become 
> > > semaphores. I think semaphores are better than mutexes. Rik and Lee saw 
> > > some performance improvements because list can be traversed in parallel 
> > > when the anon_vma lock is switched to be a rw lock.
> > 
> > The improvement was with a rw spinlock IIRC, so I don't see how it's
> > related to this.
> 
> AFAICT The rw semaphore fastpath is similar in performance to a rw 
> spinlock. 

The read side is taken in the slow path.

The write side is taken in the fast path.

The page fault is the fast path; the VM during swapping is the slow path.
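
To spell out the asymmetry, a rough sketch (the demo_* struct and
function names below are made up for illustration; only spin_lock,
list_add and list_for_each are the real APIs, and this is not the
actual mm/rmap.c code):

/* Simplified illustration, not the actual anon_vma code. */
#include <linux/spinlock.h>
#include <linux/list.h>

struct demo_anon_vma {
        spinlock_t lock;                /* currently a plain (or rw) spinlock */
        struct list_head head;          /* vmas mapping these anonymous pages */
};

/* Fast path (page fault): the whole critical section is one list_add. */
static void demo_anon_vma_link(struct demo_anon_vma *av,
                               struct list_head *vma_node)
{
        spin_lock(&av->lock);
        list_add(vma_node, &av->head);  /* <1usec of work under the lock */
        spin_unlock(&av->lock);
}

/* Slow path (rmap walk while swapping): traverse every vma on the list.
 * With a rw lock this side takes the read side, so several walkers can
 * run in parallel, while the fault path above takes the write side. */
static void demo_anon_vma_walk(struct demo_anon_vma *av)
{
        struct list_head *pos;

        spin_lock(&av->lock);
        list_for_each(pos, &av->head) {
                /* try_to_unmap()-style work on each mapping */
        }
        spin_unlock(&av->lock);
}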

> > Perhaps the rwlock spinlock can be changed to a rw semaphore without
> > measurable overscheduling in the fast path. However theoretically
> 
> Overscheduling? You mean overhead?

The only possible overhead a rw semaphore could ever generate over a
rw lock is overscheduling: going to sleep and paying a context switch
in cases where briefly spinning would have been cheaper.

> > speaking the rw_lock spinlock is more efficient than a rw semaphore in
> > case of a little contention during the page fault fast path because
> > the critical section is just a list_add so it'd be overkill to
> > schedule while waiting. That's why currently it's a spinlock (or rw
> > spinlock).
> 
> On the other hand a semaphore puts the process to sleep and may actually 
> improve performance because there is less time spend in a busy loop. 
> Other processes may do something useful and we stay off the contended 
> cacheline reducing traffic on the interconnect.

Yes, that's the positive side. The negative side is that you put the
task in uninterruptible sleep, call schedule() and require a wakeup,
all because a list_add taking <1usec is running on the other cpu. No
other downside. But that's the only reason it's a spinlock right now;
in fact there can't be any other reason.
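
As a hand-waving sketch of what the two choices cost on that contended
fast path (the demo_* and fault_path_* names are invented; only the
locking primitives themselves are real):

#include <linux/spinlock.h>
#include <linux/rwsem.h>
#include <linux/list.h>

static DEFINE_RWLOCK(demo_rwlock);
static DECLARE_RWSEM(demo_rwsem);
static LIST_HEAD(demo_list);

/* rw spinlock: if a walker on the other cpu holds the lock we busy
 * wait for the <1usec it needs, with no scheduler involvement at all. */
static void fault_path_rwlock(struct list_head *node)
{
        write_lock(&demo_rwlock);
        list_add(node, &demo_list);
        write_unlock(&demo_rwlock);
}

/* rw semaphore: on contention down_write() puts the task in
 * uninterruptible sleep and calls schedule(), and the lock holder has
 * to wake it up again when it releases the lock. */
static void fault_path_rwsem(struct list_head *node)
{
        down_write(&demo_rwsem);
        list_add(node, &demo_list);
        up_write(&demo_rwsem);
}

So the rwsem waiter pays a sleep plus a wakeup just to wait out a
<1usec list_add on the other cpu.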