Date:	Tue, 24 Nov 2015 10:16:36 -0800
From:	Mike Kravetz <mike.kravetz@...cle.com>
To:	Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Cc:	Hillf Danton <hillf.zj@...baba-inc.com>,
	"'Andrew Morton'" <akpm@...ux-foundation.org>,
	"'David Rientjes'" <rientjes@...gle.com>,
	"'Dave Hansen'" <dave.hansen@...el.com>,
	"'Mel Gorman'" <mgorman@...e.de>,
	"'Joonsoo Kim'" <iamjoonsoo.kim@....com>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"'Naoya Horiguchi'" <nao.horiguchi@...il.com>
Subject: Re: [PATCH v1] mm: hugetlb: fix hugepage memory leak caused by wrong
 reserve count

On 11/23/2015 09:32 PM, Naoya Horiguchi wrote:
> On Fri, Nov 20, 2015 at 01:56:18PM -0800, Mike Kravetz wrote:
>> On 11/19/2015 11:57 PM, Hillf Danton wrote:
>>>>
>>>> When dequeue_huge_page_vma() in alloc_huge_page() fails, we fall back to
>>>> alloc_buddy_huge_page() to directly create a hugepage from the buddy allocator.
>>>> In that case, however, if alloc_buddy_huge_page() succeeds we don't decrement
>>>> h->resv_huge_pages, which means that successful hugetlb_fault() returns without
>>>> releasing the reserve count. As a result, subsequent hugetlb_fault() might fail
>>>> despite that there are still free hugepages.
>>>>
>>>> This patch simply adds decrementing code on that code path.
>>
>> In general, I agree with the patch.  If we allocate a huge page via the
>> buddy allocator and that page will be used to satisfy a reservation, then
>> we need to decrement the reservation count.
>>
>> As Hillf mentions, this code is not exactly the same in linux-next.
>> Specifically, there is the new call to take the memory policy of the
>> vma into account when calling the buddy allocator.  I do not think,
>> this impacts your proposed change but you may want to test with that
>> in place.
>>
>>>>
>>>> I reproduced this problem when testing v4.3 kernel in the following situation:
>>>> - the test machine/VM is a NUMA system,
>>>> - hugepage overcommiting is enabled,
>>>> - most of hugepages are allocated and there's only one free hugepage
>>>>   which is on node 0 (for example),
>>>> - another program, which calls set_mempolicy(MPOL_BIND) to bind itself to
>>>>   node 1, tries to allocate a hugepage,
>>
>> I am curious about this scenario.  When this second program attempts to
>> allocate the page, I assume it creates a reservation first.  Is this
>> reservation before or after setting mempolicy?  If the mempolicy was set
>> first, I would have expected the reservation to allocate a page on
>> node 1 to satisfy the reservation.
> 
> My testing called set_mempolicy() at first then called mmap(), but things
> didn't change if I reordered them, because currently hugetlb reservation is
> not NUMA-aware.

Ah right.  I was looking at gather_surplus_pages() as called by
hugetlb_acct_memory() to account for a new reservation.  In your case,
the global free count is still large enough to satisfy the reservation
so gather_surplus_pages simply increases the global reservation count.

If there were not enough free pages, alloc_buddy_huge_page() would be
called in an attempt to allocate enough free pages.  As is the case in
alloc_huge_page(), the mempolicy of the task would be taken into
account (if there is no vma-specific policy).  So, the new huge pages to
satisfy the reservation would 'hopefully' be allocated on the correct node.
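To make the accounting concrete, here is a toy userspace model of the
surplus/reservation bookkeeping described above.  The struct fields mirror
the kernel's `struct hstate` counters, but the function is a simplified
sketch of what gather_surplus_pages() does, not the kernel code itself;
note there is deliberately no notion of a node, since (as Naoya points
out) the reservation path is not NUMA-aware.

```c
#include <stdio.h>

/* Toy model of the hstate counters discussed above; field names
 * mirror struct hstate but this is NOT the kernel implementation. */
struct hstate_model {
	long free_huge_pages;
	long resv_huge_pages;
	long surplus_huge_pages;
};

/* Simplified gather_surplus_pages(): if the global free pool already
 * covers the new reservation, just grow the reservation count;
 * otherwise "allocate" the shortfall from the buddy allocator as
 * surplus pages (allocation always succeeds in this toy model). */
static int reserve_pages(struct hstate_model *h, long delta)
{
	long needed = delta - (h->free_huge_pages - h->resv_huge_pages);

	if (needed <= 0) {
		/* enough unreserved free pages globally */
		h->resv_huge_pages += delta;
		return 0;
	}
	h->free_huge_pages += needed;
	h->surplus_huge_pages += needed;
	h->resv_huge_pages += delta;
	return 0;
}
```

In Naoya's scenario the first branch is taken: one free page exists
globally, so the reservation count is simply bumped, even though that
page sits on a node the task's mempolicy forbids.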

Sorry, I was thinking your test might be allocating a new huge page at
reservation time.  But, it is not.
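For reference, the leak itself can be modeled the same way.  The sketch
below is an illustrative simplification of the alloc_huge_page() fallback
path the patch touches (names and parameters are mine, not the kernel's):
when the dequeue fails for mempolicy reasons and the buddy allocator
satisfies the fault instead, the reserve count must still be released.

```c
#include <stdbool.h>

/* Toy model of the counters involved in the leak; NOT kernel code. */
struct hstate_toy {
	long free_huge_pages;	/* global free pool */
	long resv_huge_pages;	/* outstanding reservations */
};

/* Simplified alloc_huge_page(): dequeue_succeeds models whether
 * dequeue_huge_page_vma() can find a page on an allowed node;
 * apply_fix models the one-line decrement the patch adds on the
 * buddy-allocator fallback path. */
static bool alloc_huge_page_toy(struct hstate_toy *h,
				bool dequeue_succeeds,
				bool has_reservation,
				bool apply_fix)
{
	if (dequeue_succeeds) {
		h->free_huge_pages--;
		if (has_reservation)
			h->resv_huge_pages--;	/* reserve consumed */
		return true;
	}
	/* Fall back to the buddy allocator (always succeeds here). */
	if (has_reservation && apply_fix)
		h->resv_huge_pages--;	/* the fix: release the reserve */
	return true;
}
```

Without the decrement, resv_huge_pages stays elevated after the fault
completes, so a later fault sees no unreserved free pages and fails even
though free pages remain, which matches the symptom in the changelog.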
-- 
Mike Kravetz

> 
> Thanks,
> Naoya Horiguchi
> 
