Message-ID: <e894f6f2-d93f-4787-af40-7f021a40b156@redhat.com>
Date: Fri, 11 Oct 2024 12:12:48 +0200
From: David Hildenbrand <david@...hat.com>
To: "Ritesh Harjani (IBM)" <ritesh.list@...il.com>,
linuxppc-dev@...ts.ozlabs.org
Cc: linux-mm@...ck.org, Sourabh Jain <sourabhjain@...ux.ibm.com>,
Hari Bathini <hbathini@...ux.ibm.com>, Zi Yan <ziy@...dia.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Mahesh J Salgaonkar <mahesh@...ux.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>,
Madhavan Srinivasan <maddy@...ux.ibm.com>,
"Aneesh Kumar K . V" <aneesh.kumar@...nel.org>,
Donet Tom <donettom@...ux.vnet.ibm.com>, LKML
<linux-kernel@...r.kernel.org>, Sachin P Bappalige <sachinpb@...ux.ibm.com>
Subject: Re: [RFC v2 1/4] cma: Enforce non-zero pageblock_order during
cma_init_reserved_mem()
On 11.10.24 09:23, Ritesh Harjani (IBM) wrote:
> cma_init_reserved_mem() checks base and size alignment against
> CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
> early boot when pageblock_order is still 0. In that case, a base and
> size that lack pageblock_order alignment slip through the check and
> cause functional failures later, in cma_activate_area().
>
> So let's enforce a non-zero pageblock_order during
> cma_init_reserved_mem().
>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@...il.com>
> ---
> mm/cma.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index 3e9724716bad..36d753e7a0bf 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -182,6 +182,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
> if (!size || !memblock_is_region_reserved(base, size))
> return -EINVAL;
>
> + /*
> + * CMA uses CMA_MIN_ALIGNMENT_BYTES as its alignment requirement, which
> + * needs pageblock_order to be initialized. Let's enforce that here.
> + */
> + if (!pageblock_order) {
> + pr_err("pageblock_order not yet initialized. Called during early boot?\n");
> + return -EINVAL;
> + }
> +
> /* ensure minimal alignment required by mm core */
> if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
> return -EINVAL;
Acked-by: David Hildenbrand <david@...hat.com>
--
Cheers,
David / dhildenb