Message-ID: <20080205234742.GI7441@v2.random>
Date:	Wed, 6 Feb 2008 00:47:42 +0100
From:	Andrea Arcangeli <andrea@...ranet.com>
To:	Christoph Lameter <clameter@....com>
Cc:	Robin Holt <holt@....com>, Avi Kivity <avi@...ranet.com>,
	Izik Eidus <izike@...ranet.com>,
	kvm-devel@...ts.sourceforge.net,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>, steiner@....com,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	daniel.blueman@...drics.com
Subject: Re: [PATCH] mmu notifiers #v5

On Tue, Feb 05, 2008 at 03:10:52PM -0800, Christoph Lameter wrote:
> On Tue, 5 Feb 2008, Andrea Arcangeli wrote:
> 
> > > You can avoid the page-pin and the pt lock completely by zapping the 
> > > mappings at _start and then holding off new references until _end.
> > 
> > "holding off new references until _end" = per-range mutex less scalar
> > and more expensive than the PT lock that has to be taken anyway.
> 
> You can of course set up a 2M granularity lock to get the same granularity 
> as the pte lock. That would even work for the cases where you have to page 
> pin now.

If you set up a 2M granularity lock, the _start callback would need to
do:

	for_each_2m_lock()
		mutex_lock()

so you'd run zillions of mutex_lock calls in a row; you're the one who
made the millions-of-operations argument.
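
Spelled out a bit more (purely illustrative; range_lock() is a hypothetical
per-2M-region lookup helper, not something from any posted patch):

	#include <linux/mm.h>
	#include <linux/mutex.h>

	/* sketch of a _start callback with per-2M-region mutexes */
	static void example_range_start(struct mm_struct *mm,
					unsigned long start, unsigned long end)
	{
		unsigned long addr;

		/* take one mutex per 2M region covering [start, end) */
		for (addr = start & ~((2UL << 20) - 1); addr < end;
		     addr += (2UL << 20))
			mutex_lock(range_lock(mm, addr)); /* hypothetical */
	}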

> The size of the mmap is relevant if you have to perform callbacks on 
> every mapped page that involve taking mmu-specific locks. That seems to be 
> the case with this approach.

mmap should never trigger any range_start/_end callback unless it's
overwriting an older mapping, which is definitely not the interesting
workload for these apps, kvm included.
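
For the record, the "overwriting" case is something like the second mmap
below (a minimal userspace illustration, error checks omitted, not tied
to any particular notifier patch):

	#include <sys/mman.h>

	/* the first mmap claims fresh address space; only the second one,
	 * MAP_FIXED over the same range, has old ptes to zap, and that zap
	 * is the part a range_start/_end style callback would see */
	static void overwrite_example(size_t len)
	{
		char *a = mmap(NULL, len, PROT_READ|PROT_WRITE,
			       MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);

		a[0] = 1;	/* fault a page in so there is a pte to zap */
		mmap(a, len, PROT_READ|PROT_WRITE,
		     MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED, -1, 0);
	}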

> Optimizing do_exit by taking a single lock to zap all external references 
> instead of 1 million callbacks somehow leads to a slowdown?

It can, if the application runs for more than a couple of seconds,
i.e. it's not a fork flood where you care about do_exit speed. Keep in
mind that if you had 1 million invalidate_pages callbacks, it means you
previously called follow_page 1 million times too...
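
To make that proportionality concrete, the setup side looks roughly like
this (an illustrative sketch using find_vma/follow_page under mmap_sem,
not the actual kvm code path):

	#include <linux/mm.h>

	/* pinning one page for a secondary mmu: one follow_page per page */
	static struct page *pin_one_page(struct mm_struct *mm,
					 unsigned long addr)
	{
		struct vm_area_struct *vma;
		struct page *page = NULL;

		down_read(&mm->mmap_sem);
		vma = find_vma(mm, addr);
		if (vma && vma->vm_start <= addr)
			page = follow_page(vma, addr, FOLL_GET);
		up_read(&mm->mmap_sem);
		return page;
	}

So one invalidate callback per page at teardown is the same order of
magnitude of work that was already paid, page by page, at setup time.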
