Message-ID: <20080430061125.GF27652@wotan.suse.de>
Date: Wed, 30 Apr 2008 08:11:25 +0200
From: Nick Piggin <npiggin@...e.de>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: Mika Penttilä <mika.penttila@...umbus.fi>,
Tony Battersby <tonyb@...ernetics.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] more ZERO_PAGE handling ( was 2.6.24 regression: deadlock on coredump of big process)
On Wed, Apr 30, 2008 at 02:35:42PM +0900, KAMEZAWA Hiroyuki wrote:
> On Wed, 30 Apr 2008 07:19:32 +0200
> Nick Piggin <npiggin@...e.de> wrote:
>
> >
> > Something like this should do:
> > if (!pte_present(pte)) {
> > if (pte_none(pte)) {
> > pte_unmap_unlock
> > goto null_or_zeropage;
> > }
> > goto unlock;
> > }
> >
> Sorry for the broken work, and thank you for the advice.
> Updated.
Don't be sorry. The most important thing is that you found this tricky
problem. That's very good work IMO!
>
> Regards,
> -Kame
> ==
> follow_page() returns ZERO_PAGE if a page table is not available,
> but returns NULL if a page table exists. If NULL, handle_mm_fault()
> allocates a new page.
>
> This behavior increases page consumption at coredump time, which tends
> to trigger read-once-but-never-written page faults. This patch avoids
> that.
>
> Changelog:
> - fixed to check pte_present()/pte_none() in proper way.
>
>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
>
> Index: linux-2.6.25/mm/memory.c
> ===================================================================
> --- linux-2.6.25.orig/mm/memory.c
> +++ linux-2.6.25/mm/memory.c
> @@ -926,15 +926,15 @@ struct page *follow_page(struct vm_area_
> page = NULL;
> pgd = pgd_offset(mm, address);
> if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
> - goto no_page_table;
> + goto null_or_zeropage;
>
> pud = pud_offset(pgd, address);
> if (pud_none(*pud) || unlikely(pud_bad(*pud)))
> - goto no_page_table;
> + goto null_or_zeropage;
>
> pmd = pmd_offset(pud, address);
> if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
> - goto no_page_table;
> + goto null_or_zeropage;
>
> if (pmd_huge(*pmd)) {
> BUG_ON(flags & FOLL_GET);
> @@ -947,8 +947,13 @@ struct page *follow_page(struct vm_area_
> goto out;
>
> pte = *ptep;
> - if (!pte_present(pte))
> + if (!pte_present(pte)) {
> + if (!(flags & FOLL_WRITE) && pte_none(pte)) {
> + pte_unmap_unlock(ptep, ptl);
> + goto null_or_zeropage;
> + }
> goto unlock;
> + }
Just a small nitpick: I guess you don't need this FOLL_WRITE test because
null_or_zeropage will test FOLL_ANON which implies !FOLL_WRITE. It should give
slightly smaller code.
Otherwise, looks good to me:
Acked-by: Nick Piggin <npiggin@...e.de>
> if ((flags & FOLL_WRITE) && !pte_write(pte))
> goto unlock;
> page = vm_normal_page(vma, address, pte);
> @@ -968,7 +973,7 @@ unlock:
> out:
> return page;
>
> -no_page_table:
> +null_or_zeropage:
> /*
> * When core dumping an enormous anonymous area that nobody
> * has touched so far, we don't want to allocate page tables.