Message-ID: <6408bb64-4ab7-4d46-9c15-d7eddc9d4e39@quicinc.com>
Date: Fri, 30 Aug 2024 09:35:29 -0700
From: Oreoluwa Babatunde <quic_obabatun@...cinc.com>
To: Andy Shevchenko <andy@...ck.fi.intel.com>
CC: <robh@...nel.org>, <saravanak@...gle.com>, <klarasmodin@...il.com>,
<aisheng.dong@....com>, <hch@....de>, <m.szyprowski@...sung.com>,
<robin.murphy@....com>, <devicetree@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <iommu@...ts.linux.dev>,
<will@...nel.org>, <catalin.marinas@....com>, <kernel@...cinc.com>
Subject: Re: [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array

On 8/19/2024 10:23 AM, Andy Shevchenko wrote:
> On Fri, Aug 09, 2024 at 11:48:12AM -0700, Oreoluwa Babatunde wrote:
>> The reserved_mem array is used to store data for the different
>> reserved memory regions defined in the DT of a device. The array
>> stores information such as region name, node reference, start address,
>> and size of the different reserved memory regions.
>>
>> The array is currently statically allocated with a size of
>> MAX_RESERVED_REGIONS (64). This means that any system that specifies a
>> number of reserved memory regions greater than MAX_RESERVED_REGIONS (64)
>> will not have enough space to store the information for all the regions.
>>
>> This can be fixed by making the reserved_mem array a dynamically sized
>> array which is allocated using memblock_alloc() based on the exact
>> number of reserved memory regions defined in the DT.
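
(To illustrate, the core of the change is conceptually something like the
sketch below. This is a simplified sketch, not the exact patch code;
alloc_reserved_mem_array() is just an illustrative name.)

#include <linux/memblock.h>
#include <linux/of_reserved_mem.h>

/*
 * Before: a fixed-size array, limited to MAX_RESERVED_REGIONS (64) entries:
 *
 *         static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
 *
 * After (conceptually): size the array from the number of regions in the DT.
 */
static struct reserved_mem *reserved_mem;

static void __init alloc_reserved_mem_array(unsigned int count)
{
        reserved_mem = memblock_alloc(count * sizeof(*reserved_mem),
                                      SMP_CACHE_BYTES);
        if (!reserved_mem)
                panic("Failed to allocate reserved_mem array\n");
}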
>>
>> On architectures such as arm64, memblock-allocated memory is not
>> writable until after the page tables have been set up.
>> This is an issue because the current implementation initializes the
>> reserved memory regions and stores their information in the array before
>> the page tables are set up. Hence, dynamically allocating the
>> reserved_mem array and attempting to write information to it at this
>> point will fail.
>>
>> Therefore, the allocation of the reserved_mem array needs to be done
>> after the page tables have been set up, which means that storing the
>> reserved memory regions in the array must also wait until after the
>> page tables have been set up.
>>
>> When processing the reserved memory regions defined in the DT, these
>> regions are marked as reserved by calling memblock_reserve(base, size),
>> where base is the base address of the reserved region and size is its
>> size.
>>
>> If the region is also defined with the "no-map" property,
>> memblock_mark_nomap(base, size) is called as well.
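
(In code, each region is handled during the early FDT scan roughly as in
the sketch below; this is a simplified illustration, and the nomap flag
just stands for the presence of the "no-map" property.)

#include <linux/memblock.h>

static void __init reserve_one_region(phys_addr_t base, phys_addr_t size,
                                      bool nomap)
{
        /* Keep the early (memblock) and later (buddy) allocators off this range */
        memblock_reserve(base, size);

        /* "no-map" regions must also be kept out of the kernel's linear map */
        if (nomap)
                memblock_mark_nomap(base, size);
}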
>>
>> The "no-map" property is used to indicate to the operating system that a
>> mapping of the specified region must NOT be created. This also means
>> that no access (including speculative accesses) is allowed on this
>> region of memory except when it is coming from the device driver that
>> this region of memory is being reserved for.[1]
>>
>> Therefore, it is important to call memblock_reserve() and
>> memblock_mark_nomap() on all the reserved memory regions before the
>> system sets up the page tables so that the system does not unknowingly
>> include any of the no-map reserved memory regions in the memory map.
>>
>> There are two ways to define how/where a reserved memory region is
>> placed in memory:
>> i) Statically-placed reserved memory regions,
>>    i.e. regions defined with a fixed start address and size using the
>>    "reg" property in the DT.
>> ii) Dynamically-placed reserved memory regions,
>>    i.e. regions defined by specifying a range of addresses where they
>>    can be placed in memory, using the "alloc-ranges" and "size"
>>    properties in the DT.
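
(The two cases can be told apart while scanning the flattened DT by
looking at which properties a node carries, roughly as sketched below;
this is illustrative only and not the exact patch code.)

#include <linux/errno.h>
#include <linux/of_fdt.h>

static int __init classify_reserved_mem_node(unsigned long node)
{
        int len;

        if (of_get_flat_dt_prop(node, "reg", &len)) {
                /* i) statically placed: base and size come straight from "reg" */
                return 0;
        }

        if (of_get_flat_dt_prop(node, "size", &len)) {
                /* ii) dynamically placed: a base address must be chosen at
                 * runtime, constrained by "alloc-ranges" if it is present */
                return 0;
        }

        return -EINVAL; /* neither "reg" nor "size": nothing to reserve */
}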
>>
>> The dynamically-placed reserved memory regions get assigned a start
>> address only at runtime, and this needs to happen before the page
>> tables are set up so that memblock_reserve() and memblock_mark_nomap()
>> can be called on the allocated region, as explained above.
>> Since the dynamically allocated reserved_mem array only becomes
>> available after the page tables have been set up, the information for
>> the dynamically-placed reserved memory regions needs to be stored
>> somewhere temporarily until the reserved_mem array is available.
>>
>> Therefore, this series makes use of a temporary static array to store
>> the information of the dynamically-placed reserved memory regions until
>> the reserved_mem array is allocated.
>> Once the reserved_mem array is available, the information is copied over
>> from the temporary array into the reserved_mem array, and the memory for
>> the temporary array is freed back to the system.
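
(Very roughly, the hand-over looks like the sketch below; the names are
illustrative and the exact way the temporary array is released may differ
from the actual patches.)

#include <linux/memblock.h>
#include <linux/of_reserved_mem.h>
#include <linux/string.h>

/* Temporary holding area for dynamically-placed regions, filled in before
 * the page tables are set up; still bounded by MAX_RESERVED_REGIONS. */
static struct reserved_mem tmp_reserved_mem[MAX_RESERVED_REGIONS] __initdata;
static int tmp_count __initdata;

static void __init copy_dynamic_regions(void)
{
        /* reserved_mem is the array memblock_alloc()'d above, writable by now */
        memcpy(reserved_mem, tmp_reserved_mem,
               tmp_count * sizeof(*tmp_reserved_mem));

        /* Being __initdata, tmp_reserved_mem is handed back to the system
         * together with the rest of the init sections. */
}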
>>
>> The information for the statically-placed reserved memory regions does
>> not need to be stored in a temporary array because their start
>> addresses are already recorded in the devicetree.
>> Once the reserved_mem array is allocated, the information for the
>> statically-placed reserved memory regions is added to the array.
>>
>> Note:
>> Because a temporary array is used to store the information for the
>> dynamically-placed reserved memory regions, there is still a limit of
>> 64 for this particular kind of reserved memory region.
>> From my observation, such regions are typically small in number, so I
>> do not expect this to be an issue for now.
>
> This series (in particular the first patch) broke the boot on Intel Meteor
> Lake-P. Taking the linux-next tree of 20240819 with these patches reverted
> makes things work again.
>
> Taking into account the bisectability issue (that's how I noticed the
> problem in the first place), I think it would be nice not to have such
> patches in the respective subsystem tree at all. On my side, I can help
> with testing whatever solution or next version you provide.

Hi Andy,

I have posted a new version of my patches. Could you please test it on
your platform to confirm that the issue is no longer present?

https://lore.kernel.org/all/20240830162857.2821502-1-quic_obabatun@quicinc.com/

Thank you!
Oreoluwa