Message-ID: <lsq.1487198500.699932977@decadent.org.uk>
Date: Wed, 15 Feb 2017 22:41:40 +0000
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org,
"Martin Schwidefsky" <schwidefsky@...ibm.com>,
"Vlastimil Babka" <vbabka@...e.cz>,
"Linus Torvalds" <torvalds@...ux-foundation.org>,
"Gerald Schaefer" <gerald.schaefer@...ibm.com>,
"Heiko Carstens" <heiko.carstens@...ibm.com>,
"Dave Hansen" <dave.hansen@...ux.intel.com>,
"Mike Kravetz" <mike.kravetz@...cle.com>,
"Michal Hocko" <mhocko@...e.com>,
"Rui Teng" <rui.teng@...ux.vnet.ibm.com>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
"Naoya Horiguchi" <n-horiguchi@...jp.nec.com>
Subject: [PATCH 3.16 103/306] mm/hugetlb: fix memory offline with hugepage
size > memory block size
3.16.40-rc1 review patch. If anyone has any objections, please let me know.

------------------

From: Gerald Schaefer <gerald.schaefer@...ibm.com>

commit 2247bb335ab9c40058484cac36ea74ee652f3b7b upstream.

Patch series "mm/hugetlb: memory offline issues with hugepages", v4.

This addresses several issues with hugepages and memory offline. The
first patch fixes a panic and is therefore rather important; the last
patch is just a performance optimization.

The second patch fixes a theoretical issue with reserved hugepages,
while still leaving an ugly usability issue; see its description.

This patch (of 3):

dissolve_free_huge_pages() will either run into the VM_BUG_ON() or into
list corruption and an addressing exception when trying to set offline a
memory block that is part (but not the first part) of a "gigantic"
hugetlb page whose size is larger than the memory block size.

When no other, smaller hugetlb page sizes are present, the VM_BUG_ON()
triggers directly. In the other case we run into the addressing
exception later, because dissolve_free_huge_page() is not called on the
head page of the compound hugetlb page, which results in a NULL hstate
from page_hstate().
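
For context, page_hstate() derives the hstate from the compound order of
the page it is given; a rough sketch of that helper (paraphrased from the
mainline header of that era for illustration, not part of this patch):

	static inline struct hstate *page_hstate(struct page *page)
	{
		/*
		 * compound_order() is 0 for a tail page, so for a tail
		 * page this asks size_to_hstate() for a PAGE_SIZE
		 * "hugepage", which has no hstate -> NULL is returned
		 * and later dereferenced by dissolve_free_huge_page().
		 */
		return size_to_hstate(PAGE_SIZE << compound_order(page));
	}
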
To fix this, first remove the VM_BUG_ON() because it is wrong, and then
use the compound head page in dissolve_free_huge_page(). This means
that an unused pre-allocated gigantic page that has any part of itself
inside the memory block that is going offline will be dissolved
completely. Losing an unused gigantic hugepage is preferable to failing
the memory offline, for example in the situation where a (possibly
faulty) memory DIMM needs to go offline.
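
Purely as an illustration of that effect (the block and page sizes below
are made-up examples, not taken from this patch):

	/*
	 * Illustration only: assume a free 1 GB gigantic page spanning
	 * four 256 MB memory blocks, with the second block going
	 * offline.  The pfn walk then starts on a tail page of the
	 * gigantic page.
	 */
	struct page *page = pfn_to_page(start_pfn);  /* a tail page */
	struct page *head = compound_head(page);     /* head of the 1 GB page */
	struct hstate *h = page_hstate(head);        /* valid gigantic hstate */

	update_and_free_page(h, head);  /* dissolves the whole 1 GB page,
	                                   not just the 256 MB block that
	                                   is going offline */
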
Fixes: c8721bbb ("mm: memory-hotplug: enable memory hotplug to handle hugepage")
Link: http://lkml.kernel.org/r/20160926172811.94033-2-gerald.schaefer@de.ibm.com
Signed-off-by: Gerald Schaefer <gerald.schaefer@...ibm.com>
Acked-by: Michal Hocko <mhocko@...e.com>
Acked-by: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Vlastimil Babka <vbabka@...e.cz>
Cc: Mike Kravetz <mike.kravetz@...cle.com>
Cc: "Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@...ibm.com>
Cc: Heiko Carstens <heiko.carstens@...ibm.com>
Cc: Rui Teng <rui.teng@...ux.vnet.ibm.com>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
mm/hugetlb.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1073,12 +1073,13 @@ static void dissolve_free_huge_page(struct page *page)
 {
 	spin_lock(&hugetlb_lock);
 	if (PageHuge(page) && !page_count(page)) {
-		struct hstate *h = page_hstate(page);
-		int nid = page_to_nid(page);
-		list_del(&page->lru);
+		struct page *head = compound_head(page);
+		struct hstate *h = page_hstate(head);
+		int nid = page_to_nid(head);
+		list_del(&head->lru);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
-		update_and_free_page(h, page);
+		update_and_free_page(h, head);
 	}
 	spin_unlock(&hugetlb_lock);
 }
@@ -1086,7 +1087,8 @@ static void dissolve_free_huge_page(struct page *page)
 /*
  * Dissolve free hugepages in a given pfn range. Used by memory hotplug to
  * make specified memory blocks removable from the system.
- * Note that start_pfn should aligned with (minimum) hugepage size.
+ * Note that this will dissolve a free gigantic hugepage completely, if any
+ * part of it lies within the given range.
  */
 void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 {
@@ -1095,7 +1097,6 @@ void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 	if (!hugepages_supported())
 		return;
 
-	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << minimum_order));
 	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
 		dissolve_free_huge_page(pfn_to_page(pfn));
 }