Message-ID: <e921bf90-d1f5-3795-478b-4cfae9086749@oracle.com>
Date: Tue, 21 Jul 2020 10:05:21 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Stephen Rothwell <sfr@...b.auug.org.au>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Linux Next Mailing List <linux-next@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Barry Song <song.bao.hua@...ilicon.com>,
Jonathan Cameron <jonathan.cameron@...wei.com>,
Roman Gushchin <guro@...com>
Subject: Re: linux-next: build failure after merge of the akpm-current tree
On 7/21/20 3:57 AM, Stephen Rothwell wrote:
> Hi all,
>
> After merging the akpm-current tree, today's linux-next build
> (sparc64 defconfig) failed like this:
>
> mm/hugetlb.c: In function 'free_gigantic_page':
> mm/hugetlb.c:1233:18: error: 'hugetlb_cma' undeclared (first use in this function); did you mean 'hugetlb_lock'?
>     cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order))
>                 ^~~~~~~~~~~
>                 hugetlb_lock
>
> Caused by commits
>
> ee0889218f26 ("mm/hugetlb: avoid hardcoding while checking if cma is enabled")
> 8729fda59982 ("mm-hugetlb-avoid-hardcoding-while-checking-if-cma-is-enabled-fix")
>
> I have added this patch for today:
>
> From: Stephen Rothwell <sfr@...b.auug.org.au>
> Date: Tue, 21 Jul 2020 20:44:57 +1000
> Subject: [PATCH] mm/hugetlb: better checks before using hugetlb_cma
>
> Signed-off-by: Stephen Rothwell <sfr@...b.auug.org.au>
Thanks Stephen, and sorry for missing that in review.
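
For anyone wondering why IS_ENABLED() was not enough here:
IS_ENABLED(CONFIG_CMA) folds to a constant 0 or 1, but the compiler
still parses the whole expression, so every identifier in the dead
branch must at least be declared -- and hugetlb_cma[] is only defined
when CONFIG_CMA is set. An #ifdef removes the reference before the
compiler ever sees it. A standalone sketch of the two shapes (HAVE_CMA
and cma_pool are made-up stand-ins, not the real hugetlb symbols):

/*
 * Sketch only: HAVE_CMA and cma_pool stand in for CONFIG_CMA and
 * hugetlb_cma[]; this is not the kernel code itself.
 */
#ifdef HAVE_CMA
static int cma_pool[64];	/* exists only when HAVE_CMA is defined */
#endif

static int release_hint(int nid)
{
	/*
	 * Pre-fix shape (would not build without HAVE_CMA):
	 *
	 *	if (0 && cma_pool[nid])	   // 0 plays IS_ENABLED()'s role
	 *		return 1;
	 *
	 * The && short-circuits and the branch is provably dead, but
	 * 'cma_pool' must still be a declared identifier to parse.
	 */

	/* Post-fix shape: the preprocessor drops the reference entirely. */
#ifdef HAVE_CMA
	if (cma_pool[nid])
		return 1;
#endif
	return 0;
}

The usual trade-off applies: IS_ENABLED() keeps both branches visible
to the compiler in every config, so they stay type- and warning-checked,
while #ifdef is the only option once a symbol simply does not exist in
some configs.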
Acked-by: Mike Kravetz <mike.kravetz@...cle.com>
--
Mike Kravetz
> ---
>  mm/hugetlb.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 4b560c7555e7..4645f1441d32 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1229,9 +1229,10 @@ static void free_gigantic_page(struct page *page, unsigned int order)
>  	 * If the page isn't allocated using the cma allocator,
>  	 * cma_release() returns false.
>  	 */
> -	if (IS_ENABLED(CONFIG_CMA) &&
> -	    cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order))
> +#ifdef CONFIG_CMA
> +	if (cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order))
>  		return;
> +#endif
>  
>  	free_contig_range(page_to_pfn(page), 1 << order);
>  }
> @@ -1242,7 +1243,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>  {
>  	unsigned long nr_pages = 1UL << huge_page_order(h);
>  
> -	if (IS_ENABLED(CONFIG_CMA)) {
> +#ifdef CONFIG_CMA
> +	{
>  		struct page *page;
>  		int node;
>  
> @@ -1256,6 +1258,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>  			return page;
>  		}
>  	}
> +#endif
>  
>  	return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
>  }
>