Message-ID: <CAL_JsqKvRQdbBOsz4_mWAJ+MDcM+6hdhgyT6hd22z7Oi9bGAkA@mail.gmail.com>
Date: Fri, 28 Feb 2025 08:33:57 -0600
From: Rob Herring <robh@...nel.org>
To: Marco Felsch <m.felsch@...gutronix.de>
Cc: Oreoluwa Babatunde <quic_obabatun@...cinc.com>, aisheng.dong@....com, 
	andy@...ck.fi.intel.com, catalin.marinas@....com, devicetree@...r.kernel.org, 
	hch@....de, iommu@...ts.linux.dev, kernel@...cinc.com, klarasmodin@...il.com, 
	linux-kernel@...r.kernel.org, m.szyprowski@...sung.com, 
	quic_ninanaik@...cinc.com, robin.murphy@....com, saravanak@...gle.com, 
	will@...nel.org, stable@...r.kernel.org, kernel@...gutronix.de, 
	sashal@...nel.org
Subject: Re: [PATCH v10 1/2] of: reserved_mem: Restructure how the reserved
 memory regions are processed

On Wed, Feb 26, 2025 at 5:51 AM Marco Felsch <m.felsch@...gutronix.de> wrote:
>
> Hi,
>
> On 24-10-08, Oreoluwa Babatunde wrote:
> > Reserved memory regions defined in the devicetree can be broken up into
> > two groups:
> > i) Statically-placed reserved memory regions,
> > i.e. regions defined with a fixed start address and size using the
> >      "reg" property.
> > ii) Dynamically-placed reserved memory regions,
> > i.e. regions defined by specifying an address range where they can
> >      be placed in memory using the "alloc-ranges" and "size"
> >      properties (see the example below).
> >
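> > For illustration, a hypothetical node of each kind (the labels,
> > names and addresses here are made up):
> >
> >     reserved-memory {
> >             #address-cells = <2>;
> >             #size-cells = <2>;
> >             ranges;
> >
> >             /* i) statically placed: fixed base and size via "reg" */
> >             fw_mem: firmware@80000000 {
> >                     reg = <0x0 0x80000000 0x0 0x400000>;
> >                     no-map;
> >             };
> >
> >             /* ii) dynamically placed: the kernel picks a base
> >              * inside "alloc-ranges" for the requested "size"
> >              */
> >             linux,cma {
> >                     compatible = "shared-dma-pool";
> >                     reusable;
> >                     size = <0x0 0x10000000>;
> >                     alloc-ranges = <0x0 0x40000000 0x0 0xc0000000>;
> >             };
> >     };
> >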
> > These regions are processed and set aside at boot time.
> > This is done in two stages, as seen below (and sketched in
> > simplified C after Stage 2):
> >
> > Stage 1:
> > At this stage, fdt_scan_reserved_mem() scans through the child nodes of
> > the reserved-memory node using the flattened devicetree and does the
> > following:
> >
> > 1) If the node represents a statically-placed reserved memory region,
> >    i.e. if it is defined using the "reg" property:
> >    - Call memblock_reserve() or memblock_mark_nomap() as needed.
> >    - Add the information for that region into the reserved_mem array
> >      using fdt_reserved_mem_save_node().
> >      i.e. fdt_reserved_mem_save_node(node, name, base, size).
> >
> > 2) If the node represents a dynamically-placed reserved memory region,
> >    i.e. if it is defined using "alloc-ranges" and "size" properties:
> >    - Add the information for that region to the reserved_mem array with
> >      the starting address and size set to 0.
> >      i.e. fdt_reserved_mem_save_node(node, name, 0, 0).
> >    Note: This region is saved to the array with a starting address of 0
> >    because a starting address is not yet allocated for it.
> >
> > Stage 2:
> > After iterating through all the reserved memory nodes and storing their
> > relevant information in the reserved_mem array, fdt_init_reserved_mem() is
> > called and does the following:
> >
> > 1) For statically-placed reserved memory regions:
> >    - Call the region specific init function using
> >      __reserved_mem_init_node().
> > 2) For dynamically-placed reserved memory regions:
> >    - Call __reserved_mem_alloc_size() which is used to allocate memory
> >      for each of these regions, and mark them as nomap if they have the
> >      nomap property specified in the DT.
> >    - Call the region specific init function.
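> >
> > Roughly, in simplified C (a sketch, not the literal kernel code; the
> > two predicate helpers are hypothetical):
> >
> >     /* Stage 1: for each child of /reserved-memory in the flat DT. */
> >     if (node_has_reg(node)) {                 /* statically placed */
> >             if (nomap)
> >                     memblock_mark_nomap(base, size);
> >             else
> >                     memblock_reserve(base, size);
> >             fdt_reserved_mem_save_node(node, name, base, size);
> >     } else if (node_has_alloc_ranges_and_size(node)) {
> >             /* dynamically placed: no address assigned yet */
> >             fdt_reserved_mem_save_node(node, name, 0, 0);
> >     }
> >
> >     /* Stage 2: for each saved entry rmem in reserved_mem[]. */
> >     if (rmem->size == 0)                      /* dynamically placed */
> >             __reserved_mem_alloc_size(...);   /* args elided here */
> >     __reserved_mem_init_node(rmem);           /* region-specific init */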
> >
> > The current size of the reserved_mem array is 64, as defined by
> > MAX_RESERVED_REGIONS. This means there is a limit of 64 on how many
> > reserved memory regions can be specified on a system.
> > As systems continue to grow more and more complex, the number of
> > reserved memory regions they need is also growing and is starting to
> > hit this 64-entry limit, hence the need to make the reserved_mem
> > array dynamically sized (i.e. to allocate memory for the
> > reserved_mem array dynamically using memblock_alloc_*).
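> >
> > A minimal sketch of that idea (assuming "count" comes from a prior
> > scan of the reserved-memory nodes):
> >
> >     reserved_mem = memblock_alloc(count * sizeof(*reserved_mem),
> >                                   sizeof(*reserved_mem));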
> >
> > On architectures such as arm64, memory allocated using memblock is
> > writable only after the page tables have been set up. This means that
> > if the reserved_mem array is going to be dynamically allocated, it
> > needs to happen after the page tables have been set up, not before.
> >
> > Since the reserved memory regions are currently being processed and
> > added to the array before the page tables are set up, there is a need
> > to change the order in which some of the processing is done to allow
> > the reserved_mem array to be dynamically sized.
> >
> > It is possible to process the statically-placed reserved memory
> > regions without needing to store them in the reserved_mem array until
> > after the page tables have been set up, because all the information
> > stored in the array is readily available in the devicetree and can be
> > referenced at any time.
> > Dynamically-placed reserved memory regions, on the other hand, are
> > assigned a start address only at runtime, and hence need a place to
> > be stored once they are allocated, since there is no other reference
> > to the start address of these regions.
> >
> > Hence this patch changes the processing order of the reserved memory
> > regions in the following ways (sketched in simplified C after Step 2):
> >
> > Step 1:
> > fdt_scan_reserved_mem() scans through the child nodes of
> > the reserved-memory node using the flattened devicetree and does the
> > following:
> >
> > 1) If the node represents a statically-placed reserved memory region,
> >    i.e. if it is defined using the "reg" property:
> >    - Call memblock_reserve() or memblock_mark_nomap() as needed.
> >
> > 2) If the node represents a dynamically-placed reserved memory region,
> >    i.e. if it is defined using "alloc-ranges" and "size" properties:
> >    - Call __reserved_mem_alloc_size() which will:
> >      i) Allocate memory for the reserved region and call
> >      memblock_mark_nomap() as needed.
> >      ii) Call the region specific initialization function using
> >      fdt_init_reserved_mem_node().
> >      iii) Save the region information in the reserved_mem array using
> >      fdt_reserved_mem_save_node().
> >
> > Step 2:
> > 1) This stage of the reserved memory processing is now only used to add
> >    the statically-placed reserved memory regions into the reserved_mem
> >    array using fdt_scan_reserved_mem_reg_nodes(), as well as call their
> >    region specific initialization functions.
> >
> > 2) This step has also been moved to be after the page tables are
> >    set up. Moving this will allow us to replace the reserved_mem
> >    array with a dynamically sized array before storing the rest of
> >    these regions.
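> >
> > Roughly, in simplified C again (a sketch; the predicate helpers are
> > hypothetical):
> >
> >     /* Step 1: early FDT scan, before the page tables are set up. */
> >     if (node_has_reg(node)) {                 /* statically placed */
> >             if (nomap)
> >                     memblock_mark_nomap(base, size);
> >             else
> >                     memblock_reserve(base, size);
> >             /* recording in reserved_mem is deferred to Step 2 */
> >     } else if (node_has_alloc_ranges_and_size(node)) {
> >             /*
> >              * Allocates the region (marking it nomap if needed),
> >              * then calls fdt_init_reserved_mem_node() and
> >              * fdt_reserved_mem_save_node() internally.
> >              */
> >             __reserved_mem_alloc_size(...);   /* args elided here */
> >     }
> >
> >     /* Step 2: after the page tables are set up, so reserved_mem
> >      * can itself be memblock-allocated; record and initialize the
> >      * statically-placed regions.
> >      */
> >     fdt_scan_reserved_mem_reg_nodes();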
> >
> > Signed-off-by: Oreoluwa Babatunde <quic_obabatun@...cinc.com>
> > ---
> >  drivers/of/fdt.c             |   5 +-
> >  drivers/of/of_private.h      |   3 +-
> >  drivers/of/of_reserved_mem.c | 168 ++++++++++++++++++++++++-----------
> >  3 files changed, 122 insertions(+), 54 deletions(-)
>
> This patch got into the stable kernels (6.12.13 and later) as part of
> a Stable-dep-of chain.
> The stable kernel commit is: 9a0fe62f93ede02c27aaca81112af1e59c8c0979.
>
> With the patch applied I see that the CMA area pool is misplaced,
> which causes my 4G device to fail to activate the CMA pool. Below are
> some logs:
>
> *** Good case (6.12)
>
> root@...t:~# dmesg|grep -i cma
> [    0.000000] OF: reserved mem: initialized node linux,cma, compatible id shared-dma-pool
> [    0.000000] OF: reserved mem: 0x0000000044200000..0x00000000541fffff (262144 KiB) map reusable linux,cma
> [    0.056915] Memory: 3695024K/4194304K available (15552K kernel code, 2510K rwdata, 5992K rodata, 6016K init, 489K bss, 231772K reserved, 262144K cma-reserved)
>
> *** Bad case (6.12.16)
>
> root@...t:~# dmesg|grep -i cma
> [    0.000000] Reserved memory: created CMA memory pool at 0x00000000f2000000, size 256 MiB
> [    0.000000] OF: reserved mem: initialized node linux,cma, compatible id shared-dma-pool
> [    0.000000] OF: reserved mem: 0x00000000f2000000..0x0000000101ffffff (262144 KiB) map reusable linux,cma

>                 /*
>                  * The CMA area must be in the lower 32-bit address range.
>                  */
>                 alloc-ranges = <0x0 0x42000000 0 0xc0000000>;

Are you expecting that 0xc0000000 is the end address rather than the
size? Because your range ends at 0x1_0200_0000. Looks to me like the
kernel correctly followed what the DT said was allowed. Why it moved,
I don't know. If you change 0xc0000000 to 0xbe000000, does it work?
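
For reference, each alloc-ranges entry is a <base size> pair, so the
window you asked for is:

    base 0x0_4200_0000
  + size 0x0_c000_0000
  = end  0x1_0200_0000 (exclusive)

and the pool at 0x00000000f2000000..0x0000000101ffffff sits entirely
inside it. With 0xbe000000, base + size is exactly 0x1_0000_0000, so
every possible placement stays below 4 GiB:

    alloc-ranges = <0x0 0x42000000 0x0 0xbe000000>;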

Rob
