Message-ID: <20091126162011.GG13095@csn.ul.ie>
Date: Thu, 26 Nov 2009 16:20:12 +0000
From: Mel Gorman <mel@....ul.ie>
To: Hugh Dickins <hugh.dickins@...cali.co.uk>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Izik Eidus <ieidus@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Chris Wright <chrisw@...hat.com>,
Rik van Riel <riel@...hat.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 1/9] ksm: fix mlockfreed to munlocked
On Tue, Nov 24, 2009 at 04:40:55PM +0000, Hugh Dickins wrote:
> When KSM merges an mlocked page, it has been forgetting to munlock it:
> that's been left to free_page_mlock(), which reports it in /proc/vmstat
> as unevictable_pgs_mlockfreed instead of unevictable_pgs_munlocked (and
> whinges "Page flag mlocked set for process" in mmotm, whereas mainline
> is silently forgiving). Call munlock_vma_page() to fix that.
>
> Signed-off-by: Hugh Dickins <hugh.dickins@...cali.co.uk>
Acked-by: Mel Gorman <mel@....ul.ie>
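
A minimal sketch of the fixed tail of try_to_merge_one_page(), paraphrasing
the ksm.c hunk quoted below (the preceding checks are elided; this is only
an illustration of the ordering, not the exact tree):

	if (... && pages_identical(page, kpage))
		err = replace_page(vma, page, kpage, orig_pte);

	/*
	 * KSM merged a page that was mlocked into this VM_LOCKED vma:
	 * munlock it here instead of leaving it to free_page_mlock().
	 */
	if ((vma->vm_flags & VM_LOCKED) && !err)
		munlock_vma_page(page);

	/* munlock_vma_page() requires the page lock, so unlock afterwards */
	unlock_page(page);
out:
	return err;

That ordering is why the fix depends on the earlier patch that moved
unlock_page() down.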
> ---
> Is this a fix that I ought to backport to 2.6.32? It does rely on part of
> an earlier patch (moved unlock_page down), so does not apply cleanly as is.
>
> mm/internal.h | 3 ++-
> mm/ksm.c | 4 ++++
> mm/mlock.c | 4 ++--
> 3 files changed, 8 insertions(+), 3 deletions(-)
>
> --- ksm0/mm/internal.h 2009-11-14 10:17:02.000000000 +0000
> +++ ksm1/mm/internal.h 2009-11-22 20:39:56.000000000 +0000
> @@ -105,9 +105,10 @@ static inline int is_mlocked_vma(struct
> }
>
> /*
> - * must be called with vma's mmap_sem held for read, and page locked.
> + * must be called with vma's mmap_sem held for read or write, and page locked.
> */
> extern void mlock_vma_page(struct page *page);
> +extern void munlock_vma_page(struct page *page);
>
> /*
> * Clear the page's PageMlocked(). This can be useful in a situation where
> --- ksm0/mm/ksm.c 2009-11-14 10:17:02.000000000 +0000
> +++ ksm1/mm/ksm.c 2009-11-22 20:39:56.000000000 +0000
> @@ -34,6 +34,7 @@
> #include <linux/ksm.h>
>
> #include <asm/tlbflush.h>
> +#include "internal.h"
>
> /*
> * A few notes about the KSM scanning process,
> @@ -762,6 +763,9 @@ static int try_to_merge_one_page(struct
> pages_identical(page, kpage))
> err = replace_page(vma, page, kpage, orig_pte);
>
> + if ((vma->vm_flags & VM_LOCKED) && !err)
> + munlock_vma_page(page);
> +
> unlock_page(page);
> out:
> return err;
> --- ksm0/mm/mlock.c 2009-11-14 10:17:02.000000000 +0000
> +++ ksm1/mm/mlock.c 2009-11-22 20:39:56.000000000 +0000
> @@ -99,14 +99,14 @@ void mlock_vma_page(struct page *page)
> * not get another chance to clear PageMlocked. If we successfully
> * isolate the page and try_to_munlock() detects other VM_LOCKED vmas
> * mapping the page, it will restore the PageMlocked state, unless the page
> - * is mapped in a non-linear vma. So, we go ahead and SetPageMlocked(),
> + * is mapped in a non-linear vma. So, we go ahead and ClearPageMlocked(),
> * perhaps redundantly.
> * If we lose the isolation race, and the page is mapped by other VM_LOCKED
> * vmas, we'll detect this in vmscan--via try_to_munlock() or try_to_unmap()
> * either of which will restore the PageMlocked state by calling
> * mlock_vma_page() above, if it can grab the vma's mmap sem.
> */
> -static void munlock_vma_page(struct page *page)
> +void munlock_vma_page(struct page *page)
> {
> BUG_ON(!PageLocked(page));
>
>
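As a reading aid, the comment quoted above describes the flow that the
newly non-static munlock_vma_page() follows. Very roughly, and only as a
sketch of that description rather than the exact mm/mlock.c code (the
helper calls here are from memory of that era's mm/ code and may not
match exactly):

	void munlock_vma_page(struct page *page)
	{
		BUG_ON(!PageLocked(page));

		/* ClearPageMlocked(), perhaps redundantly */
		if (TestClearPageMlocked(page)) {
			dec_zone_page_state(page, NR_MLOCK);
			if (!isolate_lru_page(page)) {
				/*
				 * try_to_munlock() restores PageMlocked if
				 * some other VM_LOCKED vma still maps the page.
				 */
				if (try_to_munlock(page) != SWAP_MLOCK)
					count_vm_event(UNEVICTABLE_PGMUNLOCKED);
				putback_lru_page(page);
			}
			/*
			 * If we lose the isolation race, vmscan restores
			 * PageMlocked via try_to_munlock()/try_to_unmap().
			 */
		}
	}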
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab