Message-Id: <d5aefe85d1dab1bb7e99.1288798098@v2.random>
Date: Wed, 03 Nov 2010 16:28:18 +0100
From: Andrea Arcangeli <aarcange@...hat.com>
To: linux-mm@...ck.org, Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Cc: Marcelo Tosatti <mtosatti@...hat.com>, Adam Litke <agl@...ibm.com>,
Avi Kivity <avi@...hat.com>,
Hugh Dickins <hugh.dickins@...cali.co.uk>,
Rik van Riel <riel@...hat.com>, Mel Gorman <mel@....ul.ie>,
Dave Hansen <dave@...ux.vnet.ibm.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Ingo Molnar <mingo@...e.hu>, Mike Travis <travis@....com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Christoph Lameter <cl@...ux-foundation.org>,
Chris Wright <chrisw@...s-sol.org>, bpicco@...hat.com,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Johannes Weiner <hannes@...xchg.org>,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
Chris Mason <chris.mason@...cle.com>,
Borislav Petkov <bp@...en8.de>
Subject: [PATCH 43 of 66] don't leave orphaned swap cache after ksm merging
From: Andrea Arcangeli <aarcange@...hat.com>
When a swapcache page is replaced by a ksm page, don't leave the swap cache orphaned.
Signed-off-by: Andrea Arcangeli <aarcange@...hat.com>
Reviewed-by: Rik van Riel <riel@...hat.com>
---
diff --git a/mm/ksm.c b/mm/ksm.c
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -800,7 +800,7 @@ static int replace_page(struct vm_area_s
set_pte_at_notify(mm, addr, ptep, mk_pte(kpage, vma->vm_page_prot));
page_remove_rmap(page);
- put_page(page);
+ free_page_and_swap_cache(page);
pte_unmap_unlock(ptep, ptl);
err = 0;
@@ -846,7 +846,18 @@ static int try_to_merge_one_page(struct
* ptes are necessarily already write-protected. But in either
* case, we need to lock and check page_count is not raised.
*/
- if (write_protect_page(vma, page, &orig_pte) == 0) {
+ err = write_protect_page(vma, page, &orig_pte);
+
+ /*
+ * After this mapping is write-protected we don't need further
+ * checks for PageSwapCache vs page_count before unlock_page(page):
+ * we rely only on the pte_same() check, run under PT lock, to
+ * ensure the pte didn't change since we write-protected it
+ * under PG_lock.
+ */
+ unlock_page(page);
+
+ if (!err) {
if (!kpage) {
/*
* While we hold page lock, upgrade page from
@@ -855,22 +866,22 @@ static int try_to_merge_one_page(struct
*/
set_page_stable_node(page, NULL);
mark_page_accessed(page);
- err = 0;
} else if (pages_identical(page, kpage))
err = replace_page(vma, page, kpage, orig_pte);
- }
+ } else
+ err = -EFAULT;
if ((vma->vm_flags & VM_LOCKED) && kpage && !err) {
+ lock_page(page); /* for LRU manipulation */
munlock_vma_page(page);
+ unlock_page(page);
if (!PageMlocked(kpage)) {
- unlock_page(page);
lock_page(kpage);
mlock_vma_page(kpage);
- page = kpage; /* for final unlock */
+ unlock_page(kpage);
}
}
- unlock_page(page);
out:
return err;
}
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/