Message-Id: <20110216001437.443368423@clark.kroah.org>
Date: Tue, 15 Feb 2011 16:13:01 -0800
From: Greg KH <gregkh@...e.de>
To: linux-kernel@...r.kernel.org, stable@...nel.org
Cc: stable-review@...nel.org, torvalds@...ux-foundation.org,
    akpm@...ux-foundation.org, alan@...rguk.ukuu.org.uk,
    Hugh Dickins <hughd@...gle.com>, Mel Gorman <mel@....ul.ie>,
    Rik van Riel <riel@...hat.com>,
    Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
    "Junichi Nomura" <j-nomura@...jp.nec.com>,
    Andi Kleen <ak@...ux.intel.com>
Subject: [124/272] mm: fix hugepage migration

2.6.37-stable review patch. If anyone has any objections, please let us know.

------------------

From: Hugh Dickins <hughd@...gle.com>

commit fd4a4663db293bfd5dc20fb4113977f62895e550 upstream.

2.6.37 added an unmap_and_move_huge_page() for memory failure recovery,
but its anon_vma handling was still based around the 2.6.35 conventions.
Update it to use page_lock_anon_vma, get_anon_vma, page_unlock_anon_vma,
drop_anon_vma in the same way as we're now changing unmap_and_move().

I don't particularly like to propose this for stable when I've not seen
its problems in practice nor tested the solution: but it's clearly out of
synch at present.
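
For reference, the anon_vma pinning sequence that unmap_and_move_huge_page()
ends up with looks roughly like this (a condensed sketch of the hunks below,
with the intermediate unmap/move steps elided):

        struct anon_vma *anon_vma = NULL;

        if (PageAnon(hpage)) {
                /*
                 * Pin the anon_vma so it cannot be freed while the
                 * hugepage is unmapped and its ptes are rewritten.
                 */
                anon_vma = page_lock_anon_vma(hpage);
                if (anon_vma) {
                        get_anon_vma(anon_vma);
                        page_unlock_anon_vma(anon_vma);
                }
        }

        /* ... unmap hpage, move it to new_hpage, remove_migration_ptes() ... */

        if (anon_vma)
                drop_anon_vma(anon_vma);  /* drop the pinning reference */

Compared with the 2.6.35-era sequence it replaces, this stops open-coding the
external_refcount manipulation and the bare rcu_read_lock()/rcu_read_unlock()
pair around the anon_vma lookup.
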
Signed-off-by: Hugh Dickins <hughd@...gle.com>
Cc: Mel Gorman <mel@....ul.ie>
Cc: Rik van Riel <riel@...hat.com>
Cc: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Cc: "Jun'ichi Nomura" <j-nomura@...jp.nec.com>
Cc: Andi Kleen <ak@...ux.intel.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...e.de>
---
 mm/migrate.c |   23 ++++++-----------------
 1 file changed, 6 insertions(+), 17 deletions(-)

--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -805,7 +805,6 @@ static int unmap_and_move_huge_page(new_
         int rc = 0;
         int *result = NULL;
         struct page *new_hpage = get_new_page(hpage, private, &result);
-        int rcu_locked = 0;
         struct anon_vma *anon_vma = NULL;
 
         if (!new_hpage)
@@ -820,12 +819,10 @@ static int unmap_and_move_huge_page(new_
         }
 
         if (PageAnon(hpage)) {
-                rcu_read_lock();
-                rcu_locked = 1;
-
-                if (page_mapped(hpage)) {
-                        anon_vma = page_anon_vma(hpage);
-                        atomic_inc(&anon_vma->external_refcount);
+                anon_vma = page_lock_anon_vma(hpage);
+                if (anon_vma) {
+                        get_anon_vma(anon_vma);
+                        page_unlock_anon_vma(anon_vma);
                 }
         }
 
@@ -837,16 +834,8 @@ static int unmap_and_move_huge_page(new_
         if (rc)
                 remove_migration_ptes(hpage, hpage);
 
-        if (anon_vma && atomic_dec_and_lock(&anon_vma->external_refcount,
-                                            &anon_vma->lock)) {
-                int empty = list_empty(&anon_vma->head);
-                spin_unlock(&anon_vma->lock);
-                if (empty)
-                        anon_vma_free(anon_vma);
-        }
-
-        if (rcu_locked)
-                rcu_read_unlock();
+        if (anon_vma)
+                drop_anon_vma(anon_vma);
 
 out:
         unlock_page(hpage);
--