Message-ID: <Pine.LNX.4.64.0804021551500.32273@schroedinger.engr.sgi.com>
Date: Wed, 2 Apr 2008 16:04:42 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Andrea Arcangeli <andrea@...ranet.com>
cc: Hugh Dickins <hugh@...itas.com>, Robin Holt <holt@....com>,
Avi Kivity <avi@...ranet.com>, Izik Eidus <izike@...ranet.com>,
kvm-devel@...ts.sourceforge.net,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
general@...ts.openfabrics.org,
Steve Wise <swise@...ngridcomputing.com>,
Roland Dreier <rdreier@...co.com>,
Kanoj Sarcar <kanojsarcar@...oo.com>, steiner@....com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
daniel.blueman@...drics.com, Nick Piggin <npiggin@...e.de>
Subject: Re: [patch 1/9] EMM Notifier: The notifier calls
On Thu, 3 Apr 2008, Andrea Arcangeli wrote:
> I said try_to_unmap_cluster, not get_user_pages.
>
> CPU0 CPU1
> try_to_unmap_cluster:
> emm_invalidate_start in EMM (or mmu_notifier_invalidate_range_start in #v10)
> walking the list by hand in EMM (or with hlist cleaner in #v10)
> xpmem method invoked
> schedule for a long while inside invalidate_range_start while skbs are sent
> gru registers
> synchronize_rcu (sorry useless now)
All of this would be much easier if you could stop the drivel. The sync
rcu was for an earlier release of the mmu notifier. Why the sniping?
> single threaded, so taking a page fault
> secondary tlb instantiated
The driver must not allow faults to occur between start and end. The
trouble here is that GRU and XPMEM are mixed. If CPU0 had been running
GRU instead of XPMEM then the fault would not have occurred, because the
GRU would have noticed that a range operation is active. If both
subsystems had been running XPMEM then it would also have worked.
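
Something along these lines is what a driver that does not take page
refcounts has to do (sketch only; gru_data, range_active and the
function names are made up for illustration, this is not the actual gru
driver code):

#include <linux/spinlock.h>
#include <linux/errno.h>

/* Illustrative only -- not the real gru code. */
struct gru_data {
	spinlock_t lock;
	int range_active;	/* range invalidations in flight */
};

static void gru_range_start(struct gru_data *gru)
{
	spin_lock(&gru->lock);
	gru->range_active++;
	/* flush existing secondary TLB entries for the range here */
	spin_unlock(&gru->lock);
}

static void gru_range_end(struct gru_data *gru)
{
	spin_lock(&gru->lock);
	gru->range_active--;
	spin_unlock(&gru->lock);
}

static int gru_fault(struct gru_data *gru, unsigned long vaddr)
{
	int ret = 0;

	spin_lock(&gru->lock);
	if (gru->range_active) {
		/* A range op is active: refuse to instantiate a
		 * secondary TLB entry, caller retries after range_end. */
		ret = -EAGAIN;
	} else {
		/* Safe to drop in the TLB entry for vaddr; holding the
		 * lock keeps a new range_start from slipping in. */
	}
	spin_unlock(&gru->lock);
	return ret;
}

With that check in the fault path the secondary TLB entry in your
scenario would simply not be instantiated while the invalidate is still
in flight.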
I guess this means that an address space cannot reliably be registered
with multiple subsystems if some of them do not take a refcount. If all
drivers were required to take a refcount then this would not occur
either.
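
By "take a refcount" I mean something like the following (sketch only,
the my_drv_* names are made up): the driver pins the pages it maps into
its secondary TLB, so the pages cannot be freed underneath a live entry
even if the notifier callback sleeps.

#include <linux/mm.h>
#include <linux/sched.h>

static struct page *my_drv_map_page(struct mm_struct *mm,
				    unsigned long vaddr)
{
	struct page *page;
	int ret;

	down_read(&mm->mmap_sem);
	/* get_user_pages() elevates the page count, so reclaim cannot
	 * free the page while the secondary TLB entry references it. */
	ret = get_user_pages(current, mm, vaddr, 1, 1, 0, &page, NULL);
	up_read(&mm->mmap_sem);

	return ret == 1 ? page : NULL;
}

static void my_drv_unmap_page(struct page *page)
{
	/* drop the reference once the secondary TLB entry is flushed */
	put_page(page);
}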
> In general my #v10 solution mixing seqlock + rcu looks more robust and
> allows multithreaded attachment of mmu notifiers as well. I could have
Well, it's easy to say that when no one else has looked at it yet. I
expressed some concerns in my reply to your post of #v10.