Message-ID: <20231204041339.9902-1-quic_obabatun@quicinc.com>
Date:   Sun, 3 Dec 2023 20:13:33 -0800
From:   Oreoluwa Babatunde <quic_obabatun@...cinc.com>
To:     <catalin.marinas@....com>, <will@...nel.org>, <robh+dt@...nel.org>,
        <frowand.list@...il.com>
CC:     <linux-arm-kernel@...ts.infradead.org>,
        <linux-kernel@...r.kernel.org>, <devicetree@...r.kernel.org>,
        <linux-arm-msm@...r.kernel.org>, <kernel@...cinc.com>,
        Oreoluwa Babatunde <quic_obabatun@...cinc.com>
Subject: [RFC PATCH v2 0/6] Dynamic allocation of reserved_mem array.

The reserved_mem array is used to store information about the reserved
memory regions specified in the DT of a device, such as each region's
name, node, starting address, and size.
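For reference, each entry has roughly the following shape (struct
reserved_mem, as declared in include/linux/of_reserved_mem.h at the
time of this series):

    struct reserved_mem {
            const char                      *name;
            unsigned long                   fdt_node;
            const struct reserved_mem_ops   *ops;
            phys_addr_t                     base;
            phys_addr_t                     size;
            void                            *priv;
    };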

The array is currently statically allocated with a size of
MAX_RESERVED_REGIONS (64). This means that a system which specifies
more than MAX_RESERVED_REGIONS reserved memory regions will not have
enough space to store the information for all of its regions.
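Concretely, the current definition in drivers/of/of_reserved_mem.c
looks like this:

    #define MAX_RESERVED_REGIONS    64
    static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
    static int reserved_mem_count;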

Therefore, this series moves beyond the sole use of a static array for
reserved_mem and introduces an array that is dynamically allocated with
memblock_alloc(), sized according to the number of reserved memory
regions specified in the DT.

Memory obtained from memblock_alloc() is only writable after paging_init()
is called, but the reserved memory regions need to be reserved before
then so that the system does not create page table mappings for them.

Reserved memory regions can be divided into two groups:

i)  Statically-placed reserved memory regions,
    i.e. regions defined in the DT using the "reg" property.
ii) Dynamically-placed reserved memory regions,
    i.e. regions specified in the DT using the "alloc-ranges"
    and "size" properties.
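As a rough illustration (a simplified sketch of the existing early FDT
scan; the function name and control flow here are abridged, not code
from this series), the two groups are told apart by which properties a
node carries:

    /* simplified sketch: classify one reserved-memory child node */
    static void __init sketch_scan_node(unsigned long node)
    {
            const __be32 *prop;
            int len;

            prop = of_get_flat_dt_prop(node, "reg", &len);
            if (prop) {
                    /* statically-placed: base and size are read
                     * directly out of the DT */
                    phys_addr_t base = dt_mem_next_cell(dt_root_addr_cells, &prop);
                    phys_addr_t size = dt_mem_next_cell(dt_root_size_cells, &prop);

                    memblock_reserve(base, size);
            } else if (of_get_flat_dt_prop(node, "size", NULL)) {
                    /* dynamically-placed: only "size" (and optionally
                     * "alloc-ranges") is given, so an address has to
                     * be picked by memblock at boot */
            }
    }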

It is possible to call memblock_reserve() and memblock_mark_nomap() on
the statically-placed reserved memory regions without needing to save
them to the array until after paging_init(), but this is not possible
for the dynamically-placed reserved memory regions because the starting
addresses of these regions need to be stored somewhere as soon as they
are allocated.

Therefore, this series achieves the allocation and population of the
reserved_mem array in two steps:

1. Before paging_init()
   Before paging_init() is called, iterate through the reserved_mem
   nodes in the DT and do the following (see the sketch after this
   list):
   - Allocate memory for the dynamically-placed reserved memory
     regions and store their starting addresses in the statically
     allocated reserved_mem array.
   - Call memblock_reserve() and memblock_mark_nomap() on all the
     reserved memory regions as needed.
   - Count the total number of reserved_mem nodes in the DT.
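A minimal sketch of step 1, per node (the flag, parameter, and counter
names here are hypothetical; fdt_reserved_mem_save_node() is the
existing helper that fills the static array):

    static int total_reserved_mem_nodes;    /* hypothetical counter */

    /* illustrative per-node flow before paging_init() */
    static void __init sketch_early_node(unsigned long node, const char *uname,
                                         phys_addr_t base, phys_addr_t size,
                                         phys_addr_t align, phys_addr_t start,
                                         phys_addr_t end, bool dynamic, bool nomap)
    {
            if (dynamic) {
                    /* pick an address now and remember it in the small
                     * static array; nothing else records it */
                    base = memblock_phys_alloc_range(size, align, start, end);
                    fdt_reserved_mem_save_node(node, uname, base, size);
            } else if (nomap) {
                    /* statically-placed: the DT itself holds base/size,
                     * so nothing needs saving yet */
                    memblock_mark_nomap(base, size);
            } else {
                    memblock_reserve(base, size);
            }
            total_reserved_mem_nodes++;
    }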

2. After paging_init()
   After paging_init() is called (see the sketch below):
   - Allocate new memory for the reserved_mem array based on the
     number of reserved memory nodes counted in the DT.
   - Transfer all the information that was stored in the static array
     into the new array.
   - Store the rest of the reserved_mem regions, i.e. the
     statically-placed regions, in the new array.
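And a minimal sketch of step 2, assuming a hypothetical
dyn_reserved_mem_count for the number of entries saved into the static
array during step 1:

    /* hypothetical: entries saved in step 1 */
    static int dyn_reserved_mem_count;

    /* illustrative: switch from the static to the dynamic array */
    static void __init sketch_switch_array(void)
    {
            struct reserved_mem *new_array;

            new_array = memblock_alloc(total_reserved_mem_nodes *
                                       sizeof(struct reserved_mem),
                                       SMP_CACHE_BYTES);
            if (!new_array)
                    return;         /* keep using the static array */

            /* carry over the dynamically-placed entries from step 1 */
            memcpy(new_array, reserved_mem,
                   dyn_reserved_mem_count * sizeof(struct reserved_mem));

            /* the statically-placed regions are then appended by
             * walking the now-unflattened DT */
    }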

The static array is no longer needed after this point, but there is
currently no obvious way to free its memory. Therefore, the size of the
initial static array is now defined using a config option. Because the
array is used only before paging_init() to store the dynamically-placed
reserved memory regions, the required size can vary from device to
device, and making it configurable allows some memory savings.
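For illustration, the build-time sizing would look something like this
(the Kconfig symbol name is an assumption, not necessarily the one
introduced by the series):

    /* static array sized by a config option instead of a fixed 64 */
    static struct reserved_mem reserved_mem[CONFIG_OF_MAX_RESERVED_REGIONS];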

A possible way to free the memory of the static array would be to mark
it as __initdata, which would automatically free it once the init
process has finished running.
This is not pursued in this series because of the possibility of a
use-after-free: if the dynamic allocation of the reserved_mem array
fails, future accesses of the reserved_mem array will still reference
the static array. When the init process ends and that memory is freed,
any further attempt to use the reserved_mem array will result in a
use-after-free.
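A short sketch of that hazard (all names hypothetical):

    static struct reserved_mem early_rmem[64] __initdata;
    static struct reserved_mem *rmem = early_rmem; /* early default */

    static void __init sketch_replace_array(int count)
    {
            struct reserved_mem *new_array;

            new_array = memblock_alloc(count * sizeof(*new_array),
                                       SMP_CACHE_BYTES);
            if (new_array) {
                    memcpy(new_array, early_rmem,
                           count * sizeof(*new_array));
                    rmem = new_array;
            }
            /* on allocation failure, rmem still points at early_rmem,
             * which is freed at the end of init, so any later access
             * is a use-after-free */
    }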

Note:

- A limitation of this approach is that the number of
  dynamically-placed reserved memory regions is still capped by the
  size of the initial static array (64 by default).
- Upon further review, the series might need to be split up/duplicated
  for other archs.


Oreoluwa Babatunde (6):
  of: reserved_mem: Change the order that reserved_mem regions are
    stored
  of: reserved_mem: Switch call to unflatten_device_tree() to after
    paging_init()
  of: reserved_mem: Delay allocation of memory for dynamic regions
  of: reserved_mem: Add code to use unflattened DT for reserved_mem
    nodes
  of: reserved_mem: Add code to dynamically allocate reserved_mem array
  of: reserved_mem: Make MAX_RESERVED_REGIONS a config option

 arch/loongarch/kernel/setup.c      |   2 +-
 arch/mips/kernel/setup.c           |   3 +-
 arch/nios2/kernel/setup.c          |   4 +-
 arch/openrisc/kernel/setup.c       |   4 +-
 arch/powerpc/kernel/setup-common.c |   3 +
 arch/sh/kernel/setup.c             |   5 +-
 arch/um/kernel/dtb.c               |   1 -
 arch/um/kernel/um_arch.c           |   2 +
 arch/xtensa/kernel/setup.c         |   4 +-
 drivers/of/Kconfig                 |  13 +++
 drivers/of/fdt.c                   |  39 +++++--
 drivers/of/of_private.h            |   6 +-
 drivers/of/of_reserved_mem.c       | 175 +++++++++++++++++++++--------
 include/linux/of_reserved_mem.h    |   8 +-
 kernel/dma/coherent.c              |   4 +-
 kernel/dma/contiguous.c            |   8 +-
 kernel/dma/swiotlb.c               |  10 +-
 17 files changed, 205 insertions(+), 86 deletions(-)

-- 
2.17.1
