Message-ID: <fad80ac4-0be4-410e-bd58-9a086f356c74@quicinc.com>
Date: Fri, 6 Sep 2024 12:00:12 -0700
From: Oreoluwa Babatunde <quic_obabatun@...cinc.com>
To: Aisheng Dong <aisheng.dong@....com>, "robh@...nel.org" <robh@...nel.org>
CC: "andy@...ck.fi.intel.com" <andy@...ck.fi.intel.com>,
    "catalin.marinas@....com" <catalin.marinas@....com>,
    "devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
    "hch@....de" <hch@....de>,
    "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
    "kernel@...cinc.com" <kernel@...cinc.com>,
    "klarasmodin@...il.com" <klarasmodin@...il.com>,
    "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
    "m.szyprowski@...sung.com" <m.szyprowski@...sung.com>,
    "robin.murphy@....com" <robin.murphy@....com>,
    "saravanak@...gle.com" <saravanak@...gle.com>,
    "will@...nel.org" <will@...nel.org>,
    "imx@...ts.linux.dev" <imx@...ts.linux.dev>,
    Jacky Bai <ping.bai@....com>, Pengfei Li <pengfei.li_1@....com>
Subject: Re: [PATCH v8 0/2] Dynamic Allocation of the reserved_mem array
On 9/3/2024 1:56 AM, Aisheng Dong wrote:
>> From: Oreoluwa Babatunde <quic_obabatun@...cinc.com>
>> Sent: August 31, 2024 0:29
>> Subject: [PATCH v8 0/2] Dynamic Allocation of the reserved_mem array
>>
>> The reserved_mem array is used to store data for the different reserved
>> memory regions defined in the DT of a device. The array stores information
>> such as the region name, node reference, start address, and size of each
>> reserved memory region.
>>
>> The array is currently statically allocated with a size of
>> MAX_RESERVED_REGIONS (64). This means that any system that defines more
>> than MAX_RESERVED_REGIONS reserved memory regions will not have enough
>> space to store the information for all of them.
>>
>> This can be fixed by making reserved_mem a dynamically sized array that is
>> allocated using memblock_alloc() based on the exact number of reserved
>> memory regions defined in the DT.
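>>
>> For illustration, the allocation would look roughly like the sketch below
>> (simplified, not the exact patch code; the region count variable is a
>> placeholder):
>>
>>     /* Size the array to the exact number of regions found in the DT. */
>>     struct reserved_mem *new_array;
>>
>>     new_array = memblock_alloc(region_count * sizeof(*new_array),
>>                                SMP_CACHE_BYTES);
>>     if (!new_array)
>>         panic("Failed to allocate the reserved_mem array\n");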
>>
>> On architectures such as arm64, memblock-allocated memory is not writable
>> until after the page tables have been set up.
>> This is an issue because the current implementation initializes the reserved
>> memory regions and stores their information in the array before the page
>> tables are set up. Hence, dynamically allocating the reserved_mem array and
>> attempting to write information to it at this point will fail.
>>
>> Therefore, the allocation of the reserved_mem array needs to be done after
>> the page tables have been set up, which means that storing the reserved
>> memory regions' information in the array also has to wait until after the
>> page tables have been set up.
>>
>> When processing the reserved memory regions defined in the DT, these regions
>> are marked as reserved by calling memblock_reserve(base, size), where base is
>> the base address of the reserved region and size is its size.
>>
>> If a region is defined using the "no-map" property,
>> memblock_mark_nomap(base, size) is also called.
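>>
>> Per region, this amounts to roughly the following (a sketch; error
>> handling and the surrounding FDT parsing are omitted):
>>
>>     /* Keep the region from being handed out as free memory. */
>>     memblock_reserve(base, size);
>>
>>     /* For "no-map" regions, also keep the region out of the
>>      * linear/direct mapping when the page tables are built. */
>>     if (nomap)
>>         memblock_mark_nomap(base, size);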
>>
>> The "no-map" property is used to indicate to the operating system that a
>> mapping of the specified region must NOT be created. This also means that no
>> access (including speculative accesses) is allowed on this region of memory
>> except when it is coming from the device driver that this region of memory is
>> being reserved for.[1]
>>
>> Therefore, it is important to call memblock_reserve() and
>> memblock_mark_nomap() on all the reserved memory regions before the
>> system sets up the page tables so that the system does not unknowingly
>> include any of the no-map reserved memory regions in the memory map.
>>
>> There are two ways to define how/where a reserved memory region is placed
>> in memory:
>> i) Statically-placed reserved memory regions, i.e. regions defined with a
>>    fixed start address and size using the "reg" property in the DT.
>> ii) Dynamically-placed reserved memory regions, i.e. regions defined by
>>     specifying a range of addresses where they can be placed in memory,
>>     using the "alloc-ranges" and "size" properties in the DT.
>> (See the example DT snippet right after this list.)
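>>
>> As an illustration, a DT could describe one region of each kind along the
>> following lines (hypothetical node names, addresses, and sizes):
>>
>>     reserved-memory {
>>         #address-cells = <2>;
>>         #size-cells = <2>;
>>         ranges;
>>
>>         /* i) statically placed: fixed base address via "reg" */
>>         static_region: static-region@88000000 {
>>             reg = <0x0 0x88000000 0x0 0x400000>;
>>             no-map;
>>         };
>>
>>         /* ii) dynamically placed: a base address is chosen at boot
>>          * from anywhere inside "alloc-ranges" */
>>         dynamic_region: dynamic-region {
>>             compatible = "shared-dma-pool";
>>             size = <0x0 0x400000>;
>>             alloc-ranges = <0x0 0x80000000 0x0 0x20000000>;
>>             reusable;
>>         };
>>     };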
>>
>> The dynamically-placed reserved memory regions get assigned a start address
>> only at runtime, and this needs to happen before the page tables are set up
>> so that memblock_reserve() and memblock_mark_nomap() can be called on the
>> allocated region as explained above.
>> Since the dynamically allocated reserved_mem array only becomes available
>> after the page tables have been set up, the information for the
>> dynamically-placed reserved memory regions needs to be stored somewhere
>> temporarily until the reserved_mem array is available.
>>
>> Therefore, this series makes use of a temporary static array to store the
>> information of the dynamically-placed reserved memory regions until the
>> reserved_mem array is allocated.
>> Once the reserved_mem array is available, the information is copied over from
>> the temporary array into the reserved_mem array, and the memory for the
>> temporary array is freed back to the system.
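>>
>> Roughly, the handover looks like this (a sketch with placeholder names,
>> not the actual patch code):
>>
>>     /* Called once the page tables are up and reserved_mem has been
>>      * allocated with memblock_alloc(). */
>>     memcpy(reserved_mem, tmp_reserved_mem,
>>            tmp_count * sizeof(*tmp_reserved_mem));
>>
>>     /* The temporary static array can then be returned to the system,
>>      * e.g. by marking it __initdata so that it is released together
>>      * with the rest of the init memory. */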
>>
>> The information for the statically-placed reserved memory regions does not
>> need to be stored in a temporary array because their start addresses are
>> already stored in the devicetree.
>> Once the reserved_mem array is allocated, the information for the
>> statically-placed reserved memory regions is added to the array.
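>>
>> For those regions, base and size can simply be re-read from the flattened
>> DT at that point, along these lines (a sketch using the existing early-FDT
>> helpers; variable names are placeholders):
>>
>>     const __be32 *prop;
>>     int len;
>>
>>     prop = of_get_flat_dt_prop(node, "reg", &len);
>>     if (prop) {
>>         base = dt_mem_next_cell(dt_root_addr_cells, &prop);
>>         size = dt_mem_next_cell(dt_root_size_cells, &prop);
>>     }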
>>
> I tested with MX8ULP and remoteproc stopped working after applying this patchset.
> The same issue exists in linux-next with tag next-20240819.
>
> The root cause is that this patchset breaks the
> of_reserved_mem_device_init_by_idx() API used by coherent DMA
> (kernel/dma/contiguous.c): rmem->ops is not properly saved in
> fdt_init_reserved_mem_node() after the reserved memory setup function
> (e.g. rmem_dma_setup()) is called.
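>
> Conceptually, the breakage looks like the sketch below (placeholder
> names, not the actual code):
>
>     /* The entry is copied into the new array before the setup
>      * callback runs ... */
>     memcpy(new_entry, rmem, sizeof(*rmem));
>
>     /* ... but it is the setup callback, e.g. rmem_dma_setup(), that
>      * installs rmem->ops, so new_entry->ops is left NULL and
>      * of_reserved_mem_device_init_by_idx() no longer works. */
>     err = initfn(rmem);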
>
> Regards
> Aisheng
>
Hi Aisheng,

I have uploaded another version of the patches with this issue addressed.
Please help test and confirm whether the problem is resolved on your board:
https://lore.kernel.org/all/20240906185400.3244416-1-quic_obabatun@quicinc.com/
Thank you!
Oreoluwa