Message-Id: <1254344964-8124-2-git-send-email-hannes@cmpxchg.org>
Date: Wed, 30 Sep 2009 23:09:23 +0200
From: Johannes Weiner <hannes@...xchg.org>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Hugh Dickins <hugh.dickins@...cali.co.uk>,
Mel Gorman <mel@....ul.ie>,
Lee Schermerhorn <Lee.Schermerhorn@...com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: [rfc patch 2/3] mm: serialize truncation unmap against try_to_unmap()
To munlock private COW pages on truncating unmap, we must serialize
against concurrent reclaimers doing try_to_unmap() so that they do not
re-mlock the page before we free it.

Grabbing the page lock is not possible while zapping the page table
entries, so instead prevent lazy mlock in the reclaimer by holding the
anon_vma lock for the duration of unmapping a VMA.

The anon_vma can show up only after we have tried locking it.  Pass it
down in zap_details so that the zapping loops can check whether we
acquired the lock or not.
Signed-off-by: Johannes Weiner <hannes@...xchg.org>
Cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Cc: Hugh Dickins <hugh.dickins@...cali.co.uk>
Cc: Mel Gorman <mel@....ul.ie>
Cc: Lee Schermerhorn <Lee.Schermerhorn@...com>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>
---
include/linux/mm.h | 1 +
mm/memory.c | 11 +++++++++--
2 files changed, 10 insertions(+), 2 deletions(-)
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -999,6 +999,7 @@ unsigned long unmap_vmas(struct mmu_gath
int tlb_start_valid = 0;
unsigned long start = start_addr;
spinlock_t *i_mmap_lock = details? details->i_mmap_lock: NULL;
+ struct anon_vma *anon_vma = details? details->anon_vma: NULL;
int fullmm = (*tlbp)->fullmm;
struct mm_struct *mm = vma->vm_mm;
@@ -1056,8 +1057,9 @@ unsigned long unmap_vmas(struct mmu_gath
tlb_finish_mmu(*tlbp, tlb_start, start);
if (need_resched() ||
- (i_mmap_lock && spin_needbreak(i_mmap_lock))) {
- if (i_mmap_lock) {
+ (i_mmap_lock && spin_needbreak(i_mmap_lock)) ||
+ (anon_vma && spin_needbreak(&anon_vma->lock))) {
+ if (i_mmap_lock || anon_vma) {
*tlbp = NULL;
goto out;
}
@@ -2327,9 +2329,14 @@ again:
}
}
+ details->anon_vma = vma->anon_vma;
+ if (details->anon_vma)
+ spin_lock(&details->anon_vma->lock);
restart_addr = zap_page_range(vma, start_addr,
end_addr - start_addr, details);
need_break = need_resched() || spin_needbreak(details->i_mmap_lock);
+ if (details->anon_vma)
+ spin_unlock(&details->anon_vma->lock);
if (restart_addr >= end_addr) {
/* We have now completed this vma: mark it so */
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -733,6 +733,7 @@ extern void user_shm_unlock(size_t, stru
struct zap_details {
struct vm_area_struct *nonlinear_vma; /* Check page->index if set */
struct address_space *mapping; /* Backing address space */
+ struct anon_vma *anon_vma; /* Rmap for private COW pages */
bool keep_private; /* Do not touch private pages */
pgoff_t first_index; /* Lowest page->index to unmap */
pgoff_t last_index; /* Highest page->index to unmap */
--