Message-ID: <Pine.LNX.4.64.0802051013440.11705@schroedinger.engr.sgi.com>
Date: Tue, 5 Feb 2008 10:17:41 -0800 (PST)
From: Christoph Lameter <clameter@....com>
To: Andrea Arcangeli <andrea@...ranet.com>
cc: Robin Holt <holt@....com>, Avi Kivity <avi@...ranet.com>,
Izik Eidus <izike@...ranet.com>,
kvm-devel@...ts.sourceforge.net,
Peter Zijlstra <a.p.zijlstra@...llo.nl>, steiner@....com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
daniel.blueman@...drics.com
Subject: Re: [PATCH] mmu notifiers #v5
On Tue, 5 Feb 2008, Andrea Arcangeli wrote:
> given I never allow a coherency-loss between two threads that will
> read/write to two different physical pages for the same virtual
> address in remap_file_pages).
The other approach will not have any remote ptes at that point. Why would
there be a coherency issue?
> In performance terms with your patch before GRU can run follow_page it
> has to take a mm-wide global mutex where each thread in all cpus will
> have to take it. That will thrash on >4-way when the tlb misses start
No. It only has to lock the affected range. Remote page faults can occur
while another part of the address space is being invalidated. The
complexity of locking is up to the user of the mmu notifier. A simple
implementation is satisfactory for the GRU right now. Should it become a
problem then the lock granularity can be refined without changing the API.
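
To make that concrete, here is a rough sketch (illustrative only, not the
real GRU code; the gru_like_* structure, the helpers and the range
granularity are all made up) of how a notifier user can keep the locking
per-range, so remote faults and invalidations of unrelated ranges never
serialize against each other:

#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/rwsem.h>

#define NR_RANGE_LOCKS	64			/* arbitrary for the sketch */
#define RANGE_SHIFT	22			/* 4MB chunks, arbitrary */
#define RANGE_SIZE	(1UL << RANGE_SHIFT)

/* Hypothetical driver state hanging off the notifier. */
struct gru_like_state {
	struct mmu_notifier	mn;
	struct rw_semaphore	range_sem[NR_RANGE_LOCKS];
};

/* Hypothetical hardware-side helpers, not real GRU functions. */
void gru_like_flush_remote_ptes(struct gru_like_state *s,
				unsigned long start, unsigned long end);
int gru_like_insert_remote_pte(struct gru_like_state *s, unsigned long va);

static inline struct rw_semaphore *range_lock(struct gru_like_state *s,
					      unsigned long va)
{
	return &s->range_sem[(va >> RANGE_SHIFT) % NR_RANGE_LOCKS];
}

/*
 * Notifier side: only the locks covering [start, end) are taken, so a
 * remote fault on an unrelated range is never serialized against this.
 */
static void gru_like_invalidate_range(struct mmu_notifier *mn,
				      struct mm_struct *mm,
				      unsigned long start, unsigned long end)
{
	struct gru_like_state *s = container_of(mn, struct gru_like_state, mn);
	unsigned long va;

	for (va = start & ~(RANGE_SIZE - 1); va < end; va += RANGE_SIZE) {
		down_write(range_lock(s, va));
		gru_like_flush_remote_ptes(s, va, va + RANGE_SIZE);
		up_write(range_lock(s, va));
	}
}

/*
 * Remote fault side: takes only the lock for the faulting range, so
 * concurrent faults and invalidations of other ranges run in parallel.
 */
static int gru_like_remote_fault(struct gru_like_state *s, unsigned long va)
{
	int ret;

	down_read(range_lock(s, va));
	ret = gru_like_insert_remote_pte(s, va);	/* e.g. via follow_page() */
	up_read(range_lock(s, va));
	return ret;
}

Refining the granularity later (more locks, a range tree, per-pte bits)
stays entirely inside the driver; the notifier API does not change.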
> > "conversion of some page in pages"? A proposal to defer the freeing of the
> > pages until after the pte_unlock?
>
> There can be many tricks to optimize page in pages, but again munmap
> and do_exit aren't the interesting paths to optimize, neither for GRU nor
> for KVM so it doesn't matter right now.
Still not sure what we are talking about here.