Message-Id: <20180627214447.260804-1-cannonmatthews@google.com>
Date: Wed, 27 Jun 2018 14:44:47 -0700
From: Cannon Matthews <cannonmatthews@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Nadia Yvette Chambers <nyc@...omorphy.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
andreslc@...gle.com, pfeiner@...gle.com, gthelen@...gle.com,
Cannon Matthews <cannonmatthews@...gle.com>
Subject: [PATCH] mm: hugetlb: yield when prepping struct pages

When booting with very large numbers of gigantic (i.e. 1G) pages, the
operations in the loop of gather_bootmem_prealloc(), and specifically
prep_compound_gigantic_page(), take a very long time and can cause a
softlockup if enough pages are requested at boot.

For example, booting with 3844 1G pages requires prepping
(set_compound_head, init the count) over 1 billion 4K tail pages, which
takes considerable time. This should also apply to reserving the same
amount of memory as 2M pages, since the same number of struct pages is
affected in either case.

Add a cond_resched() to the outer loop in gather_bootmem_prealloc() to
prevent this lockup.

Tested: booted with softlockup_panic=1 hugepagesz=1G hugepages=3844;
no softlockup was reported, and the hugepages were reported as
successfully set up.
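
(As a further check, the final count can be read back after boot from
HugePages_Total in /proc/meminfo.)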

Signed-off-by: Cannon Matthews <cannonmatthews@...gle.com>
---
 mm/hugetlb.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a963f2034dfc..d38273c32d3b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2169,6 +2169,7 @@ static void __init gather_bootmem_prealloc(void)
 		 */
 		if (hstate_is_gigantic(h))
 			adjust_managed_page_count(page, 1 << h->order);
+		cond_resched();
 	}
 }
--
2.18.0.rc2.346.g013aa6912e-goog