Message-ID: <20080508025652.GW8276@duo.random>
Date: Thu, 8 May 2008 04:56:52 +0200
From: Andrea Arcangeli <andrea@...ranet.com>
To: Christoph Lameter <clameter@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, steiner@....com,
holt@....com, npiggin@...e.de, a.p.zijlstra@...llo.nl,
kvm-devel@...ts.sourceforge.net, kanojsarcar@...oo.com,
rdreier@...co.com, swise@...ngridcomputing.com,
linux-kernel@...r.kernel.org, avi@...ranet.com, linux-mm@...ck.org,
general@...ts.openfabrics.org, hugh@...itas.com,
rusty@...tcorp.com.au, aliguori@...ibm.com, chrisw@...hat.com,
marcelo@...ck.org, dada1@...mosbay.com, paulmck@...ibm.com
Subject: Re: [PATCH 08 of 11] anon-vma-rwsem

On Wed, May 07, 2008 at 06:12:32PM -0700, Christoph Lameter wrote:
> Andrea's mm_lock could have wider impact. It is the first effective
> way that I have seen of temporarily holding off reclaim from an address
> space. It sure is a brute force approach.

The only improvements I can imagine for mm_lock are, after renaming it
to global_mm_lock(), to reestablish the signal_pending check in the
loop that takes the spinlocks (backing off if a signal arrives) and to
cap the number of vmas at 512, so that RAM wasted on anon-vmas
wouldn't cost more than 10-100usec at most (plus the vfree, which may
be a bigger cost, but we're ok to pay it and it surely isn't security
related).
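
To illustrate, here is only a rough sketch, not the actual patch: a
hypothetical global_mm_lock() that checks signal_pending() in the loop
that takes the spinlocks and unwinds if the task got a signal. The flat
walk over mm->mmap and the mm_unlock_partial() helper are made up for
illustration; the real code would sort the i_mmap/anon_vma lock
pointers first and walk the sorted array instead.

/*
 * Illustrative sketch only, not the real mm_lock() code.  It shows
 * the signal_pending() check + backoff discussed above.  The flat
 * walk over mm->mmap and the mm_unlock_partial() helper are
 * hypothetical.
 */
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/errno.h>

static int global_mm_lock(struct mm_struct *mm)
{
	struct vm_area_struct *vma;

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		if (signal_pending(current)) {
			/* back off: drop every spinlock taken so far */
			mm_unlock_partial(mm, vma);	/* hypothetical helper */
			return -EINTR;
		}
		if (vma->anon_vma)
			spin_lock(&vma->anon_vma->lock);
	}
	return 0;
}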

Then, in the long term, we need to talk to Matt about letting the sort
function return a parameter to break the loop. After that we can remove
the 512-vma cap, and mm_lock is free to run as long as it wants, like
/dev/urandom; nobody could care less how long it runs before returning,
as long as it reacts to signals.
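
To make that concrete, a rough sketch of what an abortable sort
interface could look like (hypothetical, nothing like this exists in
lib/sort.c today; an insertion sort is used only to keep the sketch
short, the real heapsort would get the same check in its outer loop):

#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/types.h>

/*
 * Hypothetical interface: like lib/sort.c sort(), plus an abort_fn
 * callback.  When abort_fn() returns true the sort stops early and
 * returns -EINTR so the caller (mm_lock) can back off.
 */
static int sort_abortable(void *base, size_t num, size_t size,
			  int (*cmp)(const void *, const void *),
			  bool (*abort_fn)(void))
{
	char *a = base;
	size_t i, j, k;

	for (i = 1; i < num; i++) {
		if (abort_fn && abort_fn())
			return -EINTR;
		for (j = i; j > 0 && cmp(a + (j-1)*size, a + j*size) > 0; j--) {
			for (k = 0; k < size; k++) {	/* byte-wise swap */
				char tmp = a[(j-1)*size + k];
				a[(j-1)*size + k] = a[j*size + k];
				a[j*size + k] = tmp;
			}
		}
	}
	return 0;
}

/* mm_lock would pass something like this as abort_fn */
static bool mm_lock_sort_abort(void)
{
	return signal_pending(current);
}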

This is the right way if we want to support XPMEM/GRU efficiently and
without introducing unnecessary regressions in the VM fastpaths or in
the VM footprint.