Message-Id: <20241112225245.52d0858536c6fb9ba4a683c0@linux-foundation.org>
Date: Tue, 12 Nov 2024 22:52:45 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Ritesh Harjani (IBM) <ritesh.list@...il.com>
Cc: linux-mm@...ck.org, linuxppc-dev@...ts.ozlabs.org, Sourabh Jain
<sourabhjain@...ux.ibm.com>, Hari Bathini <hbathini@...ux.ibm.com>, Zi Yan
<ziy@...dia.com>, David Hildenbrand <david@...hat.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>, Mahesh J
Salgaonkar <mahesh@...ux.ibm.com>, Michael Ellerman <mpe@...erman.id.au>,
Madhavan Srinivasan <maddy@...ux.ibm.com>, "Aneesh Kumar K . V"
<aneesh.kumar@...nel.org>, Donet Tom <donettom@...ux.vnet.ibm.com>, LKML
<linux-kernel@...r.kernel.org>, Sachin P Bappalige <sachinpb@...ux.ibm.com>
Subject: Re: [RFC v3 -next] cma: Enforce non-zero pageblock_order during
cma_init_reserved_mem()
On Wed, 13 Nov 2024 07:23:43 +0530 Ritesh Harjani (IBM) <ritesh.list@...il.com> wrote:
> "Ritesh Harjani (IBM)" <ritesh.list@...il.com> writes:
>
> > cma_init_reserved_mem() checks base and size alignment against
> > CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
> > early boot when pageblock_order is 0. That means if base and size do
> > not have pageblock_order alignment, it can cause functional failures
> > during CMA area activation.
> >
> > So let's enforce pageblock_order to be non-zero during
> > cma_init_reserved_mem().
> >
> > Acked-by: David Hildenbrand <david@...hat.com>
> > Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@...il.com>
> > ---
> > v2 -> v3: Separated the series into 2 as discussed in v2.
> > [v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.list@gmail.com/
> >
> > mm/cma.c | 9 +++++++++
> > 1 file changed, 9 insertions(+)
>
> Gentle ping. Is this going into -next?
I pay little attention to anything marked "RFC". Let me take a look.
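The 9-line diff itself is not quoted in this message, so as a rough
illustration only, the kind of guard the commit message describes could
look something like the sketch below. This is a hypothetical
reconstruction, not the actual patch; pageblock_order, pr_err() and
CMA_MIN_ALIGNMENT_BYTES are existing kernel symbols, but the exact
placement and wording of the check are assumptions:

	/*
	 * Hypothetical guard near the top of cma_init_reserved_mem():
	 * CMA_MIN_ALIGNMENT_BYTES is derived from pageblock_order, so the
	 * alignment checks that follow are meaningless while
	 * pageblock_order is still 0 (i.e. before pageblocks are set up).
	 */
	if (!pageblock_order) {
		pr_err("pageblock_order not initialized; called too early during boot?\n");
		return -EINVAL;
	}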