Message-ID: <7ae152f0-1887-4862-9173-fe2e9914dbe6@linux.ibm.com>
Date: Tue, 15 Oct 2024 19:36:12 +0530
From: Madhavan Srinivasan <maddy@...ux.ibm.com>
To: "Ritesh Harjani (IBM)" <ritesh.list@...il.com>,
linuxppc-dev@...ts.ozlabs.org
Cc: linux-mm@...ck.org, Sourabh Jain <sourabhjain@...ux.ibm.com>,
Hari Bathini <hbathini@...ux.ibm.com>, Zi Yan <ziy@...dia.com>,
David Hildenbrand <david@...hat.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Mahesh J Salgaonkar <mahesh@...ux.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>,
"Aneesh Kumar K . V" <aneesh.kumar@...nel.org>,
Donet Tom <donettom@...ux.vnet.ibm.com>,
LKML <linux-kernel@...r.kernel.org>,
Sachin P Bappalige <sachinpb@...ux.ibm.com>
Subject: Re: [RFC v3 3/3] fadump: Move fadump_cma_init to setup_arch() after
initmem_init()
On 10/11/24 8:30 PM, Ritesh Harjani (IBM) wrote:
> During early init, CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE,
> since pageblock_order is still zero and only gets initialized
> later during initmem_init(), e.g.:
> setup_arch() -> initmem_init() -> sparse_init() -> set_pageblock_order()
>
> One such use case where this causes issues is -
> early_setup() -> early_init_devtree() -> fadump_reserve_mem() -> fadump_cma_init()
>
> This causes the CMA memory alignment check to be bypassed in
> cma_init_reserved_mem(). Later, cma_activate_area() can then hit
> a VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory
> area was not pageblock_order aligned.
>
> Fix it by moving fadump_cma_init() after initmem_init(),
> where other such CMA reservations also get called.
>
> <stack trace>
> ==============
> page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10010
> flags: 0x13ffff800000000(node=1|zone=0|lastcpupid=0x7ffff) CMA
> raw: 013ffff800000000 5deadbeef0000100 5deadbeef0000122 0000000000000000
> raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
> page dumped because: VM_BUG_ON_PAGE(pfn & ((1 << order) - 1))
> ------------[ cut here ]------------
> kernel BUG at mm/page_alloc.c:778!
>
> Call Trace:
> __free_one_page+0x57c/0x7b0 (unreliable)
> free_pcppages_bulk+0x1a8/0x2c8
> free_unref_page_commit+0x3d4/0x4e4
> free_unref_page+0x458/0x6d0
> init_cma_reserved_pageblock+0x114/0x198
> cma_init_reserved_areas+0x270/0x3e0
> do_one_initcall+0x80/0x2f8
> kernel_init_freeable+0x33c/0x530
> kernel_init+0x34/0x26c
> ret_from_kernel_user_thread+0x14/0x1c
>
Changes look fine to me.
Reviewed-by: Madhavan Srinivasan <maddy@...ux.ibm.com>
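
For anyone else following along, here is a simplified userspace sketch of
the ordering problem described above. The 64K PAGE_SIZE and the
pageblock_order = 9 value are purely illustrative assumptions, and the
helper below is only meant to mirror the IS_ALIGNED(base | size,
CMA_MIN_ALIGNMENT_BYTES) style check in cma_init_reserved_mem(), not to be
the actual kernel code:

#include <stdio.h>
#include <stdbool.h>

/* Illustrative values only; the real ones depend on the kernel config
 * (e.g. 64K pages on ppc64) and are not taken from this patch. */
#define PAGE_SIZE	(64UL * 1024)

static unsigned long pageblock_order;	/* 0 until set_pageblock_order() runs */

/* Rough stand-in for CMA_MIN_ALIGNMENT_BYTES = PAGE_SIZE * pageblock_nr_pages */
static unsigned long cma_min_alignment_bytes(void)
{
	return PAGE_SIZE << pageblock_order;
}

/* Rough stand-in for the minimum-alignment check done when the CMA area
 * is registered. */
static bool cma_alignment_ok(unsigned long base, unsigned long size)
{
	unsigned long align = cma_min_alignment_bytes();

	return ((base | size) & (align - 1)) == 0;
}

int main(void)
{
	unsigned long base = 0x10010UL * PAGE_SIZE;	/* pfn 0x10010, as in the splat */
	unsigned long size = 256UL * 1024 * 1024;

	/* Early boot (fadump_reserve_mem() path): pageblock_order is still 0,
	 * so the check degenerates to PAGE_SIZE alignment and the region is
	 * accepted even though it is not pageblock aligned. */
	pageblock_order = 0;
	printf("early init        : %s\n",
	       cma_alignment_ok(base, size) ? "accepted" : "rejected");

	/* After initmem_init() has run set_pageblock_order(): the same region
	 * would now be rejected up front instead of blowing up later. */
	pageblock_order = 9;
	printf("after initmem_init: %s\n",
	       cma_alignment_ok(base, size) ? "accepted" : "rejected");

	/* The same misalignment is what the activation path later trips over:
	 * pfn 0x10010 & ((1 << 9) - 1) = 0x10, hence the VM_BUG_ON_PAGE(). */
	printf("pfn & ((1 << order) - 1) = 0x%lx\n",
	       0x10010UL & ((1UL << pageblock_order) - 1));

	return 0;
}

With pageblock_order still zero the misaligned reservation sails through,
and the splat quoted above (pfn:0x10010) is the delayed failure once the
area is activated pageblock-at-a-time.
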
> Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
> Suggested-by: David Hildenbrand <david@...hat.com>
> Reported-by: Sachin P Bappalige <sachinpb@...ux.ibm.com>
> Acked-by: Hari Bathini <hbathini@...ux.ibm.com>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@...il.com>
> ---
> arch/powerpc/include/asm/fadump.h | 7 +++++++
> arch/powerpc/kernel/fadump.c | 6 +-----
> arch/powerpc/kernel/setup-common.c | 6 ++++--
> 3 files changed, 12 insertions(+), 7 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/fadump.h b/arch/powerpc/include/asm/fadump.h
> index ef40c9b6972a..3638f04447f5 100644
> --- a/arch/powerpc/include/asm/fadump.h
> +++ b/arch/powerpc/include/asm/fadump.h
> @@ -34,4 +34,11 @@ extern int early_init_dt_scan_fw_dump(unsigned long node, const char *uname,
> int depth, void *data);
> extern int fadump_reserve_mem(void);
> #endif
> +
> +#if defined(CONFIG_FA_DUMP) && defined(CONFIG_CMA)
> +void fadump_cma_init(void);
> +#else
> +static inline void fadump_cma_init(void) { }
> +#endif
> +
> #endif /* _ASM_POWERPC_FADUMP_H */
> diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
> index ffaec625b7a8..c42f89862893 100644
> --- a/arch/powerpc/kernel/fadump.c
> +++ b/arch/powerpc/kernel/fadump.c
> @@ -78,7 +78,7 @@ static struct cma *fadump_cma;
> * But for some reason even if it fails we still have the memory reservation
> * with us and we can still continue doing fadump.
> */
> -static void __init fadump_cma_init(void)
> +void __init fadump_cma_init(void)
> {
> unsigned long long base, size, end;
> int rc;
> @@ -139,8 +139,6 @@ static void __init fadump_cma_init(void)
> fw_dump.boot_memory_size >> 20);
> return;
> }
> -#else
> -static void __init fadump_cma_init(void) { }
> #endif /* CONFIG_CMA */
>
> /*
> @@ -642,8 +640,6 @@ int __init fadump_reserve_mem(void)
>
> pr_info("Reserved %lldMB of memory at %#016llx (System RAM: %lldMB)\n",
> (size >> 20), base, (memblock_phys_mem_size() >> 20));
> -
> - fadump_cma_init();
> }
>
> return ret;
> diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
> index 4bd2f87616ba..9f1e6f2e299e 100644
> --- a/arch/powerpc/kernel/setup-common.c
> +++ b/arch/powerpc/kernel/setup-common.c
> @@ -996,9 +996,11 @@ void __init setup_arch(char **cmdline_p)
> initmem_init();
>
> /*
> - * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
> - * be called after initmem_init(), so that pageblock_order is initialised.
> + * Reserve large chunks of memory for use by CMA for fadump, KVM and
> + * hugetlb. These must be called after initmem_init(), so that
> + * pageblock_order is initialised.
> */
> + fadump_cma_init();
> kvm_cma_reserve();
> gigantic_hugetlb_cma_reserve();
>