Message-ID: <547da8d7-1967-4c56-8bc1-da22a5283b77@ti.com>
Date: Thu, 10 Jul 2025 09:46:56 -0500
From: Andrew Davis <afd@...com>
To: Maxime Ripard <mripard@...nel.org>
CC: Rob Herring <robh@...nel.org>, Saravana Kannan <saravanak@...gle.com>,
	Sumit Semwal <sumit.semwal@...aro.org>,
	Benjamin Gaignard <benjamin.gaignard@...labora.com>,
	Brian Starkey <Brian.Starkey@....com>, John Stultz <jstultz@...gle.com>,
	"T.J. Mercier" <tjmercier@...gle.com>,
	Christian König <christian.koenig@....com>,
	Krzysztof Kozlowski <krzk+dt@...nel.org>,
	Conor Dooley <conor+dt@...nel.org>,
	Marek Szyprowski <m.szyprowski@...sung.com>,
	Robin Murphy <robin.murphy@....com>, Jared Kangas <jkangas@...hat.com>,
	Mattijs Korpershoek <mkorpershoek@...nel.org>,
	<devicetree@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
	<linux-media@...r.kernel.org>, <dri-devel@...ts.freedesktop.org>,
	<linaro-mm-sig@...ts.linaro.org>, <iommu@...ts.linux.dev>
Subject: Re: [PATCH v6 2/2] dma-buf: heaps: cma: Create CMA heap for each CMA
reserved region
On 7/10/25 2:44 AM, Maxime Ripard wrote:
> On Wed, Jul 09, 2025 at 11:14:37AM -0500, Andrew Davis wrote:
>> On 7/9/25 7:44 AM, Maxime Ripard wrote:
>>> Aside from the main CMA region, it can be useful to allow userspace to
>>> allocate from the other CMA reserved regions.
>>>
>>> Indeed, those regions can have specific properties that are useful for
>>> a specific use case.
>>>
>>> For example, one of the platforms I've been working with has ECC enabled
>>> on the entire memory except for a specific region. Using that region to
>>> allocate framebuffers can be particularly beneficial because enabling ECC
>>> has a performance and memory footprint cost.
>>>
>>> Thus, exposing these regions as heaps that user-space can allocate from
>>> and import wherever needed allows us to cover that use case.
>>>
>>> For now, only shared-dma-pool regions with the reusable property (i.e.,
>>> backed by CMA) are supported, but eventually we'll want to support other
>>> DMA pool types.
>>>
>>> Signed-off-by: Maxime Ripard <mripard@...nel.org>
>>> ---
>>> drivers/dma-buf/heaps/cma_heap.c | 52 +++++++++++++++++++++++++++++++++++++++-
>>> 1 file changed, 51 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
>>> index 0df007111975447d555714d61ead9699287fd65a..31a18683ee25788a800f3f878fd958718a930ff7 100644
>>> --- a/drivers/dma-buf/heaps/cma_heap.c
>>> +++ b/drivers/dma-buf/heaps/cma_heap.c
>>> @@ -19,10 +19,12 @@
>>> #include <linux/err.h>
>>> #include <linux/highmem.h>
>>> #include <linux/io.h>
>>> #include <linux/mm.h>
>>> #include <linux/module.h>
>>> +#include <linux/of.h>
>>> +#include <linux/of_reserved_mem.h>
>>> #include <linux/scatterlist.h>
>>> #include <linux/slab.h>
>>> #include <linux/vmalloc.h>
>>> #define DEFAULT_CMA_NAME "default_cma_region"
>>> @@ -421,7 +423,55 @@ static int __init add_default_cma_heap(void)
>>> ERR_PTR(ret));
>>> }
>>> return 0;
>>> }
>>> -module_init(add_default_cma_heap);
>>> +
>>> +static int __init add_cma_heaps(void)
>>> +{
>>> + struct device_node *rmem_node;
>>> + struct device_node *node;
>>> + int ret;
>>> +
>>> + ret = add_default_cma_heap();
>>
>> Will this double-add the default CMA region if it was declared
>> using DT (reserved-memory), since all those nodes are scanned again
>> in the loop below? Might need a check in that loop for linux,cma-default.
>
> Yeah, but we probably should anyway. Otherwise, if linux,cma-default
> ever changes on a platform, we would get heaps appearing/disappearing
> across reboots, which doesn't sound great from a regression perspective.
>
> Both would allocate from the same pool though, so we don't risk stepping
> on each other's toes. Or am I missing something?
>
You are not missing anything; having both wouldn't cause anything to break,
but it would cause heaps to appear/disappear based on how the CMA region was
defined (DT vs. kernel cmdline), which we should avoid.
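
Something like this in that loop is what I had in mind (untested sketch,
assuming the loop variable for the reserved-memory child node is "node"):

	/* Skip the region the kernel already registered as the default CMA heap */
	if (of_property_read_bool(node, "linux,cma-default"))
		continue;
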
Andrew
>>> + if (ret)
>>> + return ret;
>>> +
>>> + rmem_node = of_find_node_by_path("/reserved-memory");
>>> + if (!rmem_node)
>>> + goto out;
>>
>> Can just return here, "out" path doesn't need to put a NULL node.
>
> Oh, right. Thanks!
> Maxime
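
For completeness, the early return I was suggesting would look something
like this (untested, assuming a missing /reserved-memory node isn't an
error for this init call):

	rmem_node = of_find_node_by_path("/reserved-memory");
	if (!rmem_node)
		return 0;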