Message-Id: <20160921171357.1c01d481@thinkpad>
Date: Wed, 21 Sep 2016 17:13:57 +0200
From: Gerald Schaefer <gerald.schaefer@...ibm.com>
To: Rui Teng <rui.teng@...ux.vnet.ibm.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Hillf Danton <hillf.zj@...baba-inc.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Michal Hocko <mhocko@...e.cz>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Vlastimil Babka <vbabka@...e.cz>,
Mike Kravetz <mike.kravetz@...cle.com>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
Martin Schwidefsky <schwidefsky@...ibm.com>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Dave Hansen <dave.hansen@...ux.intel.com>
Subject: Re: [PATCH v2 1/1] mm/hugetlb: fix memory offline with hugepage
size > memory block size
On Wed, 21 Sep 2016 21:17:29 +0800
Rui Teng <rui.teng@...ux.vnet.ibm.com> wrote:
> > /*
> > * Dissolve free hugepages in a given pfn range. Used by memory hotplug to
> > * make specified memory blocks removable from the system.
> > - * Note that start_pfn should aligned with (minimum) hugepage size.
> > + * Note that this will dissolve a free gigantic hugepage completely, if any
> > + * part of it lies within the given range.
> > */
> > void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
> > {
> > @@ -1466,9 +1473,9 @@ void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
> > if (!hugepages_supported())
> > return;
> >
> > - VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << minimum_order));
> > for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
> > - dissolve_free_huge_page(pfn_to_page(pfn));
> > +		if (PageHuge(pfn_to_page(pfn)))
> > +			dissolve_free_huge_page(pfn_to_page(pfn));
> How many times will dissolve_free_huge_page() be invoked in this loop?
> For each pfn, it will be converted to the head page, and then the list
> will be deleted repeatedly.
In the case where the memory block [start_pfn, end_pfn] is part of a
gigantic hugepage, dissolve_free_huge_page() will only be invoked once.
If the gigantic hugepage pool is the only hugepage pool, the loop step of
1 << minimum_order pages will be larger than the memory block, and the
loop will stop after the first invocation of dissolve_free_huge_page().
If there are additional hugepage pools, with hugepage sizes < memory
block size, then it will loop as many times as 1 << minimum_order fits
inside a memory block, e.g. 256 times with 1 MB minimum hugepage size
and 256 MB memory block size.
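For reference, with the patch the loop looks like this (reconstructed from
the hunk quoted above, whitespace may differ from the actual source):

void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
{
	unsigned long pfn;

	if (!hugepages_supported())
		return;

	/* step through the range in chunks of the smallest hugepage size */
	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
		if (PageHuge(pfn_to_page(pfn)))
			dissolve_free_huge_page(pfn_to_page(pfn));
}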
However, the PageHuge() check should always return false after the first
invocation of dissolve_free_huge_page(), since update_and_free_page()
will take care of resetting compound_dtor, and so there will also be
just one invocation.
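For illustration, dissolve_free_huge_page() with this patch looks roughly
like this (simplified sketch from memory, not the verbatim source):

static void dissolve_free_huge_page(struct page *page)
{
	spin_lock(&hugetlb_lock);
	if (PageHuge(page) && !page_count(page)) {
		/* always operate on the head page, also for tail page pfns */
		struct page *head = compound_head(page);
		struct hstate *h = page_hstate(head);
		int nid = page_to_nid(head);

		list_del(&head->lru);
		h->free_huge_pages--;
		h->free_huge_pages_node[nid]--;
		/*
		 * This destroys the compound page and resets compound_dtor,
		 * so PageHuge() will return false for all pfns of the former
		 * hugepage afterwards.
		 */
		update_and_free_page(h, head);
	}
	spin_unlock(&hugetlb_lock);
}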
The only case where there will be more than one invocation is the case
where we do not have any part of a gigantic hugepage inside the memory
block, but rather multiple "normal sized" hugepages. Then there will be
one invocation per hugepage, as opposed to one invocation per
"1 << minimum_order" range as it was before the patch. So it also
improves the behaviour in the case where there is no gigantic page
involved.
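To put numbers on the 1 MB / 256 MB example from above (figures only for
illustration):

/*
 * 256 MB memory block, 1 MB minimum hugepage size:
 * - before the patch: 256 loop iterations and 256 calls to
 *   dissolve_free_huge_page(), one per 1 << minimum_order range
 * - after the patch: still 256 loop iterations, but dissolve_free_huge_page()
 *   is only called where PageHuge() is true, i.e. once per free hugepage in
 *   the block, or exactly once if the block is part of a gigantic hugepage
 */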
> > }
> >
> > /*
> >
>