Date:	Fri, 20 Nov 2015 16:20:17 +0900
From:	Naoya Horiguchi <n-horiguchi@...jp.nec.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	David Rientjes <rientjes@...gle.com>,
	Dave Hansen <dave.hansen@...el.com>,
	Mel Gorman <mgorman@...e.de>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Hillf Danton <hillf.zj@...baba-inc.com>,
	Mike Kravetz <mike.kravetz@...cle.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	Naoya Horiguchi <nao.horiguchi@...il.com>
Subject: [PATCH v1] mm: hugetlb: fix hugepage memory leak caused by wrong reserve count

When dequeue_huge_page_vma() in alloc_huge_page() fails, we fall back to
alloc_buddy_huge_page() to create a hugepage directly from the buddy
allocator. In that case, however, if alloc_buddy_huge_page() succeeds we
don't decrement h->resv_huge_pages, which means that a successful
hugetlb_fault() returns without releasing the reserve count. As a result,
subsequent hugetlb_fault() calls might fail even though there are still
free hugepages.

This patch simply adds the missing decrement on that code path.
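
For illustration only, here is a small user-space toy model of the
accounting described above. This is not kernel code: the counter names are
borrowed from mm/hugetlb.c, but the functions are simplified stand-ins that
model only the bookkeeping, not the real allocators.

#include <stdbool.h>
#include <stdio.h>

static long free_huge_pages = 1;	/* one free hugepage, on the "wrong" node */
static long resv_huge_pages = 1;	/* one outstanding reservation */

/* Normal path: dequeueing a free hugepage also consumes the reservation. */
static bool dequeue_huge_page_vma(bool node_ok)
{
	if (!node_ok || free_huge_pages == 0)
		return false;
	free_huge_pages--;
	resv_huge_pages--;		/* reservation is released here */
	return true;
}

/* Fall-back path: a fresh page from the buddy allocator. */
static bool alloc_buddy_huge_page(bool fixed)
{
	if (fixed)
		resv_huge_pages--;	/* the decrement this patch adds */
	return true;			/* pretend the buddy allocation succeeded */
}

int main(void)
{
	/* The mempolicy forbids the node holding the free page, so the
	 * dequeue fails and the (unfixed) fall-back path runs instead. */
	if (!dequeue_huge_page_vma(false))
		alloc_buddy_huge_page(false);

	/* The fault succeeded, yet resv_huge_pages is still 1: the single
	 * free hugepage stays earmarked for a reservation that was already
	 * satisfied, so later faults cannot use it. */
	printf("free_huge_pages=%ld resv_huge_pages=%ld\n",
	       free_huge_pages, resv_huge_pages);
	return 0;
}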

I reproduced this problem when testing the v4.3 kernel in the following
situation:
- the test machine/VM is a NUMA system,
- hugepage overcommitting is enabled,
- most hugepages are already allocated and there is only one free hugepage
  left, which is on node 0 (for example),
- another program, which calls set_mempolicy(MPOL_BIND) to bind itself to
  node 1, tries to allocate a hugepage (a rough sketch of such a program
  follows below),
- the allocation should fail, but the reserve count is still held.
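
For reference, here is a rough sketch of the second program in the scenario
above. The hugepage size, mmap flags, and error handling are my
assumptions, not part of the original test; build with -lnuma.

#define _GNU_SOURCE
#include <numaif.h>		/* set_mempolicy(), MPOL_BIND; link with -lnuma */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)	/* assumes 2MB hugepages */

int main(void)
{
	unsigned long nodemask = 1UL << 1;	/* node 1 only */
	void *p;

	/* Bind this task's allocations to node 1. */
	if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8))
		perror("set_mempolicy");

	/* Reserve one hugepage; for a MAP_PRIVATE hugetlb mapping the
	 * reservation is taken here, at mmap() time. */
	p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Fault the page: the only free hugepage sits on node 0, so the
	 * dequeue path fails and the buddy fall-back in alloc_huge_page()
	 * runs instead, which is where the missing resv_huge_pages
	 * decrement matters. Compare HugePages_Rsvd in /proc/meminfo
	 * with and without the fix. */
	memset(p, 0, HPAGE_SIZE);

	return 0;
}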

Signed-off-by: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Cc: <stable@...r.kernel.org> [3.16+]
---
- the reason I set the stable target to "3.16+" is that this patch applies
  easily/automatically to those versions. The bug itself seems to be an
  older one, though, so if you are interested in backporting to older
  kernels, please let me know.
---
 mm/hugetlb.c |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git v4.3/mm/hugetlb.c v4.3_patched/mm/hugetlb.c
index 9cc7734..77c518c 100644
--- v4.3/mm/hugetlb.c
+++ v4.3_patched/mm/hugetlb.c
@@ -1790,7 +1790,10 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 		page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
 		if (!page)
 			goto out_uncharge_cgroup;
-
+		if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
+			SetPagePrivate(page);
+			h->resv_huge_pages--;
+		}
 		spin_lock(&hugetlb_lock);
 		list_move(&page->lru, &h->hugepage_activelist);
 		/* Fall through */
-- 
1.7.1
