Message-ID: <20151124050323.GA31053@hori1.linux.bs1.fc.nec.co.jp>
Date: Tue, 24 Nov 2015 05:03:34 +0000
From: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: Hillf Danton <hillf.zj@...baba-inc.com>,
"'David Rientjes'" <rientjes@...gle.com>,
"'Dave Hansen'" <dave.hansen@...el.com>,
"'Mel Gorman'" <mgorman@...e.de>,
"'Joonsoo Kim'" <iamjoonsoo.kim@....com>,
"'Mike Kravetz'" <mike.kravetz@...cle.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"'Naoya Horiguchi'" <nao.horiguchi@...il.com>
Subject: Re: [PATCH v1] mm: hugetlb: fix hugepage memory leak caused by
wrong reserve count

On Fri, Nov 20, 2015 at 02:26:38PM -0800, Andrew Morton wrote:
> On Fri, 20 Nov 2015 15:57:21 +0800 "Hillf Danton" <hillf.zj@...baba-inc.com> wrote:
>
> > >
> > > When dequeue_huge_page_vma() in alloc_huge_page() fails, we fall back to
> > > alloc_buddy_huge_page() to directly create a hugepage from the buddy allocator.
> > > In that case, however, if alloc_buddy_huge_page() succeeds, we don't decrement
> > > h->resv_huge_pages, which means that a successful hugetlb_fault() returns without
> > > releasing the reserve count. As a result, a subsequent hugetlb_fault() might fail
> > > even though there are still free hugepages.
> > >
> > > This patch simply adds the missing decrement on that code path.
> > >
> > > I reproduced this problem when testing the v4.3 kernel in the following situation:
> > > - the test machine/VM is a NUMA system,
> > > - hugepage overcommitting is enabled,
> > > - most of the hugepages are allocated and there's only one free hugepage
> > >   left, which is on node 0 (for example),
> > > - another program, which calls set_mempolicy(MPOL_BIND) to bind itself to
> > >   node 1, tries to allocate a hugepage,
> > > - the allocation should fail, but the reserve count is still held.
> > >
> > > Signed-off-by: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
> > > Cc: <stable@...r.kernel.org> [3.16+]
> > > ---
> > > - the reason why I set the stable target to "3.16+" is that this patch can be
> > >   applied easily/automatically to those versions. But this bug seems to be an
> > >   old one, so if you are interested in backporting it to older kernels,
> > >   please let me know.
> > > ---
> > > mm/hugetlb.c | 5 ++++-
> > > 1 files changed, 4 insertions(+), 1 deletions(-)
> > >
> > > diff --git v4.3/mm/hugetlb.c v4.3_patched/mm/hugetlb.c
> > > index 9cc7734..77c518c 100644
> > > --- v4.3/mm/hugetlb.c
> > > +++ v4.3_patched/mm/hugetlb.c
> > > @@ -1790,7 +1790,10 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
> > > page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
> > > if (!page)
> > > goto out_uncharge_cgroup;
> > > -
> > > + if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
> > > + SetPagePrivate(page);
> > > + h->resv_huge_pages--;
> > > + }
> >
> > I am wondering if this patch was prepared against the next tree.
>
> It's against 4.3.

Hi Hillf, Andrew,

That's right, this was against 4.3, and I agree with the adjustment
for next as done below.
> Here's the version I have, against current -linus:
>
> --- a/mm/hugetlb.c~mm-hugetlb-fix-hugepage-memory-leak-caused-by-wrong-reserve-count
> +++ a/mm/hugetlb.c
> @@ -1886,7 +1886,10 @@ struct page *alloc_huge_page(struct vm_a
> page = __alloc_buddy_huge_page_with_mpol(h, vma, addr);
> if (!page)
> goto out_uncharge_cgroup;
> -
> + if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
> + SetPagePrivate(page);
> + h->resv_huge_pages--;
> + }
> spin_lock(&hugetlb_lock);
> list_move(&page->lru, &h->hugepage_activelist);
> /* Fall through */
>
> It needs a careful re-review and, preferably, retest please.

I retested and confirmed that the fix works on next-20151123.
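
For reference, the reproduction scenario quoted above can be approximated with a
small program along the lines of the sketch below. This is only an illustrative
sketch, not the exact test case: the 2MB hugepage size, the node numbers, and the
set_mempolicy()/mmap() details are assumptions, and it presumes hugepage overcommit
has been enabled (nr_overcommit_hugepages > 0) with the last free hugepage sitting
on node 0. It needs the libnuma headers (numaif.h) and linking with -lnuma.

/* hugetlb-mpol-repro.c (illustrative name): bind to node 1, fault in one hugepage */
#define _GNU_SOURCE
#include <numaif.h>             /* set_mempolicy(), MPOL_BIND; link with -lnuma */
#include <sys/mman.h>           /* mmap(), MAP_HUGETLB */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        unsigned long nodemask = 1UL << 1;      /* allow node 1 only */
        size_t len = 2UL << 20;                 /* one 2MB hugepage (assumed default size) */
        char *p;

        /* bind this task's page allocations to node 1 */
        if (set_mempolicy(MPOL_BIND, &nodemask, 8 * sizeof(nodemask))) {
                perror("set_mempolicy");
                exit(1);
        }

        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                exit(1);
        }

        /*
         * The write below triggers hugetlb_fault(). With the only free
         * hugepage on node 0, dequeue_huge_page_vma() fails and the
         * alloc_buddy_huge_page() fallback discussed above is taken.
         */
        memset(p, 0, len);

        munmap(p, len);
        return 0;
}

As described in the changelog above, on a kernel without the fix the reserve is not
released on that fallback path, so a later hugepage fault can fail even though
/proc/meminfo still reports a non-zero HugePages_Free.
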
Thanks,
Naoya Horiguchi
> Probably when Greg comes to merge this he'll hit problems and we'll
> need to provide him with the against-4.3 patch.