Message-ID: <20160810000706.GA28043@hori1.linux.bs1.fc.nec.co.jp>
Date:	Wed, 10 Aug 2016 00:07:06 +0000
From:	Naoya Horiguchi <n-horiguchi@...jp.nec.com>
To:	zhong jiang <zhongjiang@...wei.com>
CC:	Mike Kravetz <mike.kravetz@...cle.com>,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: fix the incorrect hugepages count

On Tue, Aug 09, 2016 at 06:32:39PM +0800, zhong jiang wrote:
> On 2016/8/9 1:14, Mike Kravetz wrote:
> > On 08/07/2016 07:49 PM, zhongjiang wrote:
> >> From: zhong jiang <zhongjiang@...wei.com>
> >>
> >> When memory hotplug is enabled, free hugepages will be freed if a
> >> movable node goes offline. Therefore, /proc/sys/vm/nr_hugepages will
> >> be incorrect.

This sounds a bit odd to me because /proc/sys/vm/nr_hugepages returns
h->nr_huge_pages or h->nr_huge_pages_node[nid], which is already
accounted for in dissolve_free_huge_page (via update_and_free_page).
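
For reference, the decrement happens in update_and_free_page(); a
simplified excerpt (from mm/hugetlb.c around that time, with the
per-page teardown elided) looks roughly like this:

static void update_and_free_page(struct hstate *h, struct page *page)
{
	h->nr_huge_pages--;
	h->nr_huge_pages_node[page_to_nid(page)]--;
	/* ... clear hugetlb/compound state on the subpages ... */
	/* ... then release them to the buddy allocator ... */
}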

I think that h->max_huge_pages effectively means the pool size, and
h->nr_huge_pages means the total hugepage count (which can be greater
than the pool size when there is overcommitting/surplus).
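
As an aside, you can observe the distinction from userspace. A minimal
sketch, assuming the default hugepage size is 2MB so that the
hugepages-2048kB sysfs directory exists:

/* Prints the counters discussed above. /proc/sys/vm/nr_hugepages reads
 * back h->nr_huge_pages (all instantiated hugepages, surplus included),
 * so the persistent pool size is approximately nr - surplus.
 */
#include <stdio.h>

static long read_counter(const char *path)
{
	long val = -1;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%ld", &val) != 1)
			val = -1;
		fclose(f);
	}
	return val;
}

int main(void)
{
	/* hugepages-2048kB is an assumption (x86_64 2MB default size) */
	const char *base = "/sys/kernel/mm/hugepages/hugepages-2048kB";
	char path[96];
	long nr, surplus;

	nr = read_counter("/proc/sys/vm/nr_hugepages");
	snprintf(path, sizeof(path), "%s/surplus_hugepages", base);
	surplus = read_counter(path);

	printf("nr_hugepages (h->nr_huge_pages):   %ld\n", nr);
	printf("surplus_hugepages:                 %ld\n", surplus);
	if (nr >= 0 && surplus >= 0)
		printf("persistent pool (~max_huge_pages): %ld\n",
		       nr - surplus);
	return 0;
}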

dissolve_free_huge_page intends to break a hugepage back into buddy
pages; the destination hugepage is supposed to be allocated from the
pool of the destination node, so the system-wide pool size is reduced.
Adding h->max_huge_pages-- therefore makes sense to me.
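
In other words, with the change applied the function would look roughly
like this (simplified sketch based on the quoted hunk and its context;
same locking as today):

static void dissolve_free_huge_page(struct page *page)
{
	spin_lock(&hugetlb_lock);
	if (PageHuge(page) && !page_count(page)) {
		struct hstate *h = page_hstate(page);
		int nid = page_to_nid(page);

		list_del(&page->lru);
		h->free_huge_pages--;
		h->free_huge_pages_node[nid]--;
		h->max_huge_pages--;	/* the proposed change */
		update_and_free_page(h, page);
	}
	spin_unlock(&hugetlb_lock);
}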

Acked-by: Naoya Horiguchi <n-horiguchi@...jp.nec.com>

> >>
> >> The patch fixes this by reducing max_huge_pages when the node goes offline.
> >>
> >> Signed-off-by: zhong jiang <zhongjiang@...wei.com>
> >> ---
> >>  mm/hugetlb.c | 1 +
> >>  1 file changed, 1 insertion(+)
> >>
> >> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> >> index f904246..3356e3a 100644
> >> --- a/mm/hugetlb.c
> >> +++ b/mm/hugetlb.c
> >> @@ -1448,6 +1448,7 @@ static void dissolve_free_huge_page(struct page *page)
> >>  		list_del(&page->lru);
> >>  		h->free_huge_pages--;
> >>  		h->free_huge_pages_node[nid]--;
> >> +		h->max_huge_pages--;
> >>  		update_and_free_page(h, page);
> >>  	}
> >>  	spin_unlock(&hugetlb_lock);
> >>
> > Adding Naoya as he was the original author of this code.
> >
> > From a quick look it appears that the huge page will be migrated (allocated
> > on another node).  If my understanding is correct, then max_huge_pages
> > should not be adjusted here.
> >
>   We need to take free hugetlb pages into account.  Of course, there is
>   no need to reduce the count for allocated huge pages.  The patch just
>   reduces the free hugetlb page count.

