Message-ID: <20080227234317.GM28483@v2.random>
Date: Thu, 28 Feb 2008 00:43:17 +0100
From: Andrea Arcangeli <andrea@...ranet.com>
To: Christoph Lameter <clameter@....com>
Cc: Nick Piggin <npiggin@...e.de>,
Steve Wise <swise@...ngridcomputing.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>, linux-mm@...ck.org,
Kanoj Sarcar <kanojsarcar@...oo.com>,
Roland Dreier <rdreier@...co.com>,
Jack Steiner <steiner@....com>, linux-kernel@...r.kernel.org,
Avi Kivity <avi@...ranet.com>, kvm-devel@...ts.sourceforge.net,
daniel.blueman@...drics.com, Robin Holt <holt@....com>,
general@...ts.openfabrics.org, akpm@...ux-foundation.org
Subject: Re: [kvm-devel] [PATCH] mmu notifiers #v7

On Wed, Feb 27, 2008 at 03:06:10PM -0800, Christoph Lameter wrote:
> Ok so it somehow works slowly with GRU and you are happy with it. What

As far as GRU is concerned, performance is the same as with your patch
(Jack can confirm).

> about the RDMA folks etc etc?

If RDMA/IB folks need to block in invalidate_range, I guess they'd
need to do so on top of tmpfs too, and that never worked with your
patch anyway.

> Would it not be better to have a solution that fits all instead of hacking
> something in now and then having to modify it later?

The whole point is that your solution, too, only really fits GRU and
KVM. XPMEM in your patch works in a hacked mode limited to anonymous
memory only; Robin has already received mail asking to allow XPMEM to
work on more than anonymous memory, so your solution-that-fits-all
doesn't actually fit some of Robin's customers' needs. If it doesn't
even entirely satisfy the XPMEM users, imagine the other potential
blocking users of this code.

> Hmmm.. There were earlier discussions of changing the anon vma lock to a
> rw lock because of contention issues in large systems. Maybe we can just
> generally switch the locks taken while walking rmaps to semaphores? That
> would still require to put the invalidate outside of the pte lock.

The anon_vma lock can remain a spinlock unless you also want to
schedule inside try_to_unmap.
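
To make that concrete, here's a minimal hypothetical sketch (simplified
names, not the real try_to_unmap_anon) of why a blocking
->invalidate_range() can't simply be called from inside the anon rmap
walk as long as anon_vma->lock stays a spinlock:

#include <linux/mm.h>
#include <linux/rmap.h>
#include <linux/list.h>
#include <linux/spinlock.h>

/* Hypothetical stand-in for the real per-vma unmap work. */
static int unmap_one(struct page *page, struct vm_area_struct *vma)
{
	(void)page;
	(void)vma;
	return 0;
}

/*
 * Sketch only: the walk runs in atomic context between spin_lock()
 * and spin_unlock(), so any notifier invoked from here must not
 * sleep.
 */
static int sketch_try_to_unmap_anon(struct page *page,
				    struct anon_vma *anon_vma)
{
	struct vm_area_struct *vma;
	int ret = 0;

	spin_lock(&anon_vma->lock);		/* atomic from here on */
	list_for_each_entry(vma, &anon_vma->head, anon_vma_node) {
		/*
		 * A non-sleeping notifier (GRU/KVM style) is fine at
		 * this point; an XPMEM/RDMA style notifier that wants
		 * to block would sleep under the spinlock, which is
		 * why the lock would have to become a semaphore first.
		 */
		ret = unmap_one(page, vma);	/* hypothetical helper */
		if (ret)
			break;
	}
	spin_unlock(&anon_vma->lock);
	return ret;
}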

If converting the i_mmap_lock to a mutex is too much trouble, another
way that might allow invalidate_range to block would be to boost
mm_users, to prevent mmu_notifier_release from running on another cpu
the moment after the i_mmap_lock spinlock is unlocked. But even if
that works, it'll run slower, and the mmu notifier RCU locking would
have to be switched to a mutex, so it'd be nice to have it as a
separate option.
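
Roughly what I mean, as a hand-written sketch only (the notifier
invocation itself is left as a comment, since the exact #v7 hook
signature isn't the point here):

#include <linux/mm_types.h>
#include <linux/fs.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

/*
 * Sketch of the mm_users boosting idea: pin the mm so that the final
 * mmput() -> mmu_notifier_release() path can't run on another cpu
 * while a blocking invalidate_range is still in flight.  Illustration
 * only, not code from the #v7 patch.
 */
static void sketch_blocking_invalidate(struct mm_struct *mm,
				       struct address_space *mapping,
				       unsigned long start,
				       unsigned long end)
{
	/*
	 * Succeeds only while mm_users > 0; if it fails the last user
	 * is already gone and mmu_notifier_release will (or did) run.
	 */
	if (!atomic_inc_not_zero(&mm->mm_users))
		return;

	/* Now it is safe to drop the spinlock taken by the rmap path. */
	spin_unlock(&mapping->i_mmap_lock);

	/*
	 * ... call the blocking ->invalidate_range() notifier here,
	 * passing start/end; it may sleep because no spinlock is held
	 * and the mm can't be torn down underneath it ...
	 */

	mmput(mm);	/* drop the pin; may run mmu_notifier_release */
}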