Message-ID: <20180628112139.GC32348@dhcp22.suse.cz>
Date: Thu, 28 Jun 2018 13:21:39 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Cannon Matthews <cannonmatthews@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Nadia Yvette Chambers <nyc@...omorphy.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
andreslc@...gle.com, pfeiner@...gle.com, gthelen@...gle.com
Subject: Re: [PATCH] mm: hugetlb: yield when prepping struct pages
On Wed 27-06-18 14:44:47, Cannon Matthews wrote:
> When booting with very large numbers of gigantic (i.e. 1G) pages, the
> operations in the loop of gather_bootmem_prealloc, and specifically
> prep_compound_gigantic_page, take a very long time and can cause a
> softlockup if enough pages are requested at boot.
>
> For example, booting with 3844 1G pages requires prepping
> (set_compound_head, init the count) over 1 billion 4K tail pages, which
> takes considerable time. This should also apply to reserving the same
> amount of memory as 2M pages, as the same number of struct pages
> are affected in either case.
>
> Add a cond_resched() to the outer loop in gather_bootmem_prealloc() to
> prevent this lockup.
>
> Tested: Booted with softlockup_panic=1 hugepagesz=1G hugepages=3844 and
> no softlockup is reported, and the hugepages are reported as
> successfully setup.
>
> Signed-off-by: Cannon Matthews <cannonmatthews@...gle.com>
Acked-by: Michal Hocko <mhocko@...e.com>
Thanks!
> ---
> mm/hugetlb.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index a963f2034dfc..d38273c32d3b 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2169,6 +2169,7 @@ static void __init gather_bootmem_prealloc(void)
> */
> if (hstate_is_gigantic(h))
> adjust_managed_page_count(page, 1 << h->order);
> + cond_resched();
> }
> }
>
> --
> 2.18.0.rc2.346.g013aa6912e-goog
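[For readers unfamiliar with the pattern the patch applies, here is a minimal
userspace sketch of the same idea: a long-running loop that periodically yields
the CPU so other tasks are not starved. sched_yield() stands in for the kernel's
cond_resched(), and the function name, loop bound, and yield interval are all
made up for illustration; they are not from the patch.]

```c
#include <sched.h>

/* Userspace analog of the fix above: do a large amount of per-item
 * work, but yield the processor periodically so the scheduler can run
 * other tasks. In the kernel, cond_resched() serves this role and only
 * reschedules when a reschedule is actually pending; sched_yield()
 * here yields unconditionally, which is good enough for a sketch. */
static unsigned long prep_pages(unsigned long npages)
{
	unsigned long done = 0;
	unsigned long i;

	for (i = 0; i < npages; i++) {
		done++;			/* stands in for per-page prep work */
		if ((i & 0xffff) == 0)	/* yield every 64K iterations */
			sched_yield();
	}
	return done;
}
```

[Note that the actual patch yields once per gigantic page in the outer loop of
gather_bootmem_prealloc(), not per tail page; cond_resched() is cheap when no
reschedule is needed, so calling it at a coarse granularity is sufficient to
avoid the softlockup.]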
--
Michal Hocko
SUSE Labs