Message-ID: <d513960f-59aa-496f-95fa-28a01b419fc0@oss.qualcomm.com>
Date: Fri, 2 Jan 2026 15:01:10 -0800
From: Oreoluwa Babatunde <oreoluwa.babatunde@....qualcomm.com>
To: Rob Herring <robh@...nel.org>, Marek Szyprowski <m.szyprowski@...sung.com>
Cc: ye.li@....nxp.com, kernel@....qualcomm.com, saravanak@...gle.com,
akpm@...ux-foundation.org, david@...hat.com,
lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com, vbabka@...e.cz,
rppt@...nel.org, surenb@...gle.com, mhocko@...e.com,
robin.murphy@....com, devicetree@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
iommu@...ts.linux.dev, quic_c_gdjako@...cinc.com
Subject: Re: [PATCH] of: reserved_mem: Allow reserved_mem framework detect
"cma=" kernel param

On 12/18/2025 6:42 AM, Rob Herring wrote:
> On Thu, Dec 18, 2025 at 3:55 AM Marek Szyprowski
> <m.szyprowski@...sung.com> wrote:
>>
>> On 10.12.2025 15:07, Rob Herring wrote:
>>> On Tue, Dec 9, 2025 at 6:20 PM Oreoluwa Babatunde
>>> <oreoluwa.babatunde@....qualcomm.com> wrote:
>>>> When initializing the default cma region, the "cma=" kernel parameter
>>>> takes priority over a DT defined linux,cma-default region. Hence, give
>>>> the reserved_mem framework the ability to detect this so that the DT
>>>> defined cma region can skip initialization accordingly.
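
(For reference, the interaction described above amounts to roughly the
sketch below. It loosely follows the existing logic in
kernel/dma/contiguous.c, but cma_cmdline_requested() is a made-up name
for whatever hook would let the reserved_mem framework see that "cma="
was given; it is not the actual patch.)

/*
 * Illustrative sketch only, not the actual patch. The "cma=" early_param
 * and the DT "linux,cma-default" handler both want to set up the default
 * CMA area; the command-line request should win, so the DT handler needs
 * a way to find out that "cma=" was passed.
 */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/of_fdt.h>
#include <linux/of_reserved_mem.h>

static phys_addr_t size_cmdline __initdata = -1;        /* set by "cma=" */

static int __init early_cma(char *p)
{
        size_cmdline = memparse(p, &p);
        return 0;
}
early_param("cma", early_cma);

/* Hypothetical query the reserved_mem framework could use. */
static bool __init cma_cmdline_requested(void)
{
        return size_cmdline != -1;
}

static int __init rmem_cma_setup(struct reserved_mem *rmem)
{
        bool default_cma = of_get_flat_dt_prop(rmem->fdt_node,
                                               "linux,cma-default", NULL);

        /*
         * "cma=" on the command line takes priority over the DT-defined
         * default region, so skip initializing this one.
         */
        if (default_cma && cma_cmdline_requested())
                return -EBUSY;

        /* ... normal CMA region setup ... */
        return 0;
}
RESERVEDMEM_OF_DECLARE(cma, "shared-dma-pool", rmem_cma_setup);
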
>>> Please explain here why this is a new problem. Presumably the
>>> RESERVEDMEM_OF_DECLARE hook after commit xxxx gets called before the
>>> early_param hook. And why is it now earlier?
>>>
>>> I don't really like the state/ordering having to be worried about in 2 places.
>>
>> I also don't like this spaghetti, but it originates from
>> commit 8a6e02d0c00e ("of: reserved_mem: Restructure how the reserved
>> memory regions are processed") and the first fixup for it: 2c223f7239f3
>> ("of: reserved_mem: Restructure call site for
>> dma_contiguous_early_fixup()").
>
> Honestly, this code wasn't great before. Every time it is touched it
> breaks someone.
>
>> It looks like it is really hard to make reserved memory
>> initialization fully dynamic, given that the CMA-related fixups have
>> to be known before populating the kernel memory page tables. I also
>> suggested in
>> https://lore.kernel.org/all/be70bdc4-bddd-4afe-8574-7e0889fd381c@samsung.com/
>> simply increasing the size of the static table to make it large
>> enough for the sane use cases, but it turned out that this approach
>> was already discussed and rejected:
>> https://lore.kernel.org/all/1650488954-26662-1-git-send-email-quic_pdaly@quicinc.com/
>
> I guess the question is what's a sane limit? After 128, are we going
> to accept 256? I really suspect we are just enabling some further
> abuse of /reserved-memory downstream. For example, I could imagine
> micromanaging the location of media/graphics buffers so that they
> end up in specific DRAM banks to optimize accesses. No one ever wants
> to detail why they want/need more regions.

An earlier patch that requested an increase to the static size of the
reserved_mem array did include a breakdown of why a larger size could
be needed, e.g. CMA regions, dma-buf heaps, guest VMs, hypervisors, etc.:
https://lore.kernel.org/all/1650488954-26662-1-git-send-email-quic_pdaly@quicinc.com/

I also see the same problem with a static size: if we just increase it
to 128 now, what happens when someone else needs 256 later? This is why
some form of dynamic sizing makes sense to me.
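
To illustrate what I mean by dynamic sizing (a simplified sketch, not
the exact code in drivers/of/of_reserved_mem.c, and the 64 below is
just an example value):

#include <linux/cache.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/memblock.h>
#include <linux/of_reserved_mem.h>

/* Static approach: the capacity has to be guessed at build time. */
#define MAX_RESERVED_REGIONS    64
static struct reserved_mem reserved_mem_static[MAX_RESERVED_REGIONS];

/*
 * Dynamic approach: scan the /reserved-memory nodes once to count them,
 * then allocate exactly that many entries from memblock before the
 * regions are actually processed.
 */
static struct reserved_mem *reserved_mem_dyn;

static void __init alloc_reserved_mem_array(int count)
{
        reserved_mem_dyn = memblock_alloc(count * sizeof(*reserved_mem_dyn),
                                          SMP_CACHE_BYTES);
        if (!reserved_mem_dyn)
                panic("%s: failed to allocate reserved_mem array\n", __func__);
}

The second form only ever allocates what the DT actually declares,
which (as I understand it) is what the recent restructuring was
going for.
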
>
>> Maybe it would make sense to revert the mentioned changes and go back
>> to such a simple approach - to make the size of the static table
>> configurable via Kconfig?
>
> I'd rather not resort to a kconfig option.
>
What issues do you see with using a Kconfig option as a solution for this?
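
For comparison, the Kconfig route would boil down to something like the
following (CONFIG_OF_RESERVED_MEM_MAX_REGIONS is a made-up symbol, just
to illustrate the idea):

/*
 * Hypothetical sketch of the Kconfig approach; the symbol name is made
 * up. In drivers/of/Kconfig it would pair with something like:
 *
 *   config OF_RESERVED_MEM_MAX_REGIONS
 *           int "Maximum number of reserved memory regions"
 *           default 64
 *
 * The table stays static, but its size comes from .config instead of a
 * hard-coded constant.
 */
#include <linux/of_reserved_mem.h>

#ifndef CONFIG_OF_RESERVED_MEM_MAX_REGIONS
#define CONFIG_OF_RESERVED_MEM_MAX_REGIONS 64
#endif

static struct reserved_mem reserved_mem[CONFIG_OF_RESERVED_MEM_MAX_REGIONS];

It would not remove the need to pick a number; it would just make the
number a per-build decision rather than a hard-coded one.
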
Regards,
Oreoluwa