Message-ID: <20080201120955.GX7185@v2.random>
Date: Fri, 1 Feb 2008 13:09:55 +0100
From: Andrea Arcangeli <andrea@...ranet.com>
To: Christoph Lameter <clameter@....com>
Cc: Robin Holt <holt@....com>, Avi Kivity <avi@...ranet.com>,
Izik Eidus <izike@...ranet.com>,
kvm-devel@...ts.sourceforge.net,
Peter Zijlstra <a.p.zijlstra@...llo.nl>, steiner@....com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
daniel.blueman@...drics.com
Subject: Re: [PATCH] mmu notifiers #v5
On Thu, Jan 31, 2008 at 05:44:24PM -0800, Christoph Lameter wrote:
> The trouble is that the invalidates are much more expensive if you have to
> send these to remote partitions (XPmem). And it's really great if you can
> simply tear down everything. Certainly this is a significant improvement
> over the earlier approach but you still have the invalidate_page calls in
> ptep_clear_flush. So they fire needlessly?
Dunno, they certainly fire more frequently than yours; even _pages
fires more frequently than range_start/end, but don't forget why!
That's because I have a different spinlock for every 512
ptes/4k-gru-tlbs being invalidated... so it pays off in
scalability. I'm unsure if the GRU could play tricks with your patch
to still allow faults to happen in parallel when they're on virtual
addresses not in the same 2M naturally aligned chunk.
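To illustrate what I mean by the lock granularity (just a sketch of
the concept, not code from my patch; pte_lockptr and PMD_SIZE are the
real kernel helpers, invalidate_pages_under_ptl is a made-up name):

#include <linux/mm.h>

/*
 * With split PT locks every page-table page (512 ptes, i.e. one
 * naturally aligned 2M chunk of virtual address space) has its own
 * spinlock, so invalidating one chunk only takes that chunk's lock.
 */
static void invalidate_pages_under_ptl(struct mm_struct *mm, pmd_t *pmd,
				       unsigned long start)
{
	spinlock_t *ptl = pte_lockptr(mm, pmd);

	spin_lock(ptl);
	/*
	 * ... clear the ptes and run ->invalidate_pages(mm, start,
	 * start + PMD_SIZE) here, before dropping the lock ...
	 */
	spin_unlock(ptl);
}

A secondary-MMU fault on a different 2M chunk maps to a different
ptl, so it doesn't have to wait for the invalidate above; a single
global range lock would serialize both.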
> Serializing access in the device driver makes sense and comes with the
> additional possibility of not having to increment page counts all the time.
> So you trade one cacheline dirtying for the many that are necessary if you
> always increment the page count.
Note that my #v5 doesn't require increasing the page count all the
time, so the GRU will work fine with #v5.
See this comment in my patch:
/*
 * invalidate_page[s] is called in atomic context
 * after any pte has been updated and before
 * dropping the PT lock required to update any Linux pte.
 * Once the PT lock is released the pte will have its
 * final value to export through the secondary MMU.
 * Before this is invoked any secondary MMU is still ok
 * to read/write to the page previously pointed to by the
 * Linux pte because the old page hasn't been freed yet.
 * If required, set_page_dirty has to be called internally
 * by this method.
 */
invalidate_page[s] is always called before the page is freed. Taking
advantage of _pages in certain places will require modifications to
the TLB-flushing logic; for now it's just safe.
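In other words the call site looks roughly like this (a hand-waved
sketch, not a verbatim hunk of the patch; mmu_notifier() is the
invocation macro from #v5, the rest are the usual kernel helpers):

#include <linux/mm.h>
#include <linux/mmu_notifier.h>

/* caller holds the PT lock for this (present) pte */
static void zap_pte(struct vm_area_struct *vma, unsigned long address,
		    pte_t *ptep)
{
	pte_t pteval = ptep_clear_flush(vma, address, ptep);
	struct page *page = pte_page(pteval);

	/*
	 * The Linux pte now has its final value: stop the secondary
	 * MMU. Until this point the secondary MMU could still safely
	 * read/write the page because it hasn't been freed yet.
	 */
	mmu_notifier(invalidate_page, vma->vm_mm, address);

	if (pte_dirty(pteval))
		set_page_dirty(page);
	/*
	 * rmap teardown and the actual freeing happen later, only
	 * after the PT lock is dropped.
	 */
}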
> How does KVM ensure the consistency of the shadow page tables? Atomic ops?
A per-VM mmu_lock spinlock is taken to serialize the access, plus
atomic ops for the CPU.
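Roughly like this (illustration only, not the exact kvm code; assume
struct kvm grew an mmu_notifier member for the registration):

#include <linux/kvm_host.h>
#include <linux/mmu_notifier.h>

static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
					     struct mm_struct *mm,
					     unsigned long address)
{
	struct kvm *kvm = container_of(mn, struct kvm, mmu_notifier);

	/*
	 * The same per-VM lock is taken by the shadow page fault
	 * path, so zapping sptes here can't race with the fault
	 * path instantiating them.
	 */
	spin_lock(&kvm->mmu_lock);
	/* ... find the rmap for this address and zap the sptes ... */
	spin_unlock(&kvm->mmu_lock);
}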
> The GRU has no page table of its own. It populates TLB entries on demand
> using the Linux page table. There is no way it can figure out when to
> drop page counts again. The invalidate calls are turned directly into TLB
> flushes.
Yes, this is why with your patch it can't serialize follow_page with
only the PT lock. KVM may be able to do it once you add start,end to
range_end, but only thanks to the additional pin on the page.
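For reference, a gru-alike in sketch form (all gru_* names are
hypothetical; only follow_page and page_to_pfn are real kernel APIs):
the fault side instantiates TLB entries straight from the Linux pte
without taking a page reference, so the invalidate side must flush
before the page can be freed:

#include <linux/mm.h>
#include <linux/mmu_notifier.h>

struct gru_ctx {
	struct mmu_notifier mn;		/* registered against the mm */
	struct vm_area_struct *vma;
};

/* hypothetical hardware ops, not real kernel APIs */
static void gru_hw_flush_tlb(struct gru_ctx *ctx, unsigned long address);
static int gru_hw_load_tlb(struct gru_ctx *ctx, unsigned long address,
			   unsigned long pfn);

static void gru_invalidate_page(struct mmu_notifier *mn,
				struct mm_struct *mm, unsigned long address)
{
	struct gru_ctx *ctx = container_of(mn, struct gru_ctx, mn);

	gru_hw_flush_tlb(ctx, address);
}

static int gru_fault(struct gru_ctx *ctx, unsigned long address)
{
	/* flags 0, no FOLL_GET: the gru never holds a page refcount */
	struct page *page = follow_page(ctx->vma, address, 0);

	if (!page)
		return -EFAULT;
	/*
	 * The driver needs its own lock here against
	 * gru_invalidate_page; the PT lock alone isn't visible
	 * to it, which is the point above.
	 */
	return gru_hw_load_tlb(ctx, address, page_to_pfn(page));
}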