Message-ID: <20251130172939.574999-1-swarajgaikwad1925@gmail.com>
Date: Sun, 30 Nov 2025 17:29:39 +0000
From: Swaraj Gaikwad <swarajgaikwad1925@...il.com>
To: Mike Rapoport <rppt@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-mm@...ck.org (open list:MEMBLOCK AND MEMORY MANAGEMENT INITIALIZATION),
	linux-kernel@...r.kernel.org (open list)
Cc: skhan@...uxfoundation.org,
	david.hunter.linux@...il.com,
	Swaraj Gaikwad <swarajgaikwad1925@...il.com>
Subject: [PATCH RFC] mm/memblock: Fix reserve_mem allocation overlapping KHO scratch regions

Currently, `reserve_mem=` does not check for overlap with the KHO
(Kexec HandOver) scratch regions described by kho_scratch[]. As a
result, a memblock allocation may land inside a KHO-provided scratch
region, leading to corruption or loss of the reserved data. This is
noted by the following TODO in reserve_mem():

  /* TODO: Allocation must be outside of scratch region */

This RFC proposes extending `reserve_mem()` to allocate memory *only*
in the gaps outside the KHO scratch intervals. The logic is:

  1. Walk through all KHO scratch ranges (kho_scratch[]).
  2. Attempt allocation in each safe gap:
        [curr_start_addr, scratch_start)
  3. If no gap yields an allocation, attempt to allocate above the
     last scratch region.
  4. If all attempts fail, return -ENOMEM.

The allocation is done via `memblock_phys_alloc_range()`, which already
supports constrained range allocation and preserves alignment guarantees.
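
For illustration, here is a stand-alone user-space sketch of the gap
walk (not part of the patch). The `struct range` array, `try_alloc()`,
and the base/limit bounds are hypothetical stand-ins for kho_scratch[],
memblock_phys_alloc_range(), and the allocator's accessible range. The
sketch assumes the scratch ranges are sorted by address and
non-overlapping, and that align is a power of two:

  #include <stdint.h>
  #include <stdio.h>

  typedef uint64_t phys_addr_t;

  /* Hypothetical stand-in for a kho_scratch[] entry. */
  struct range {
          phys_addr_t addr;
          phys_addr_t size;
  };

  /*
   * Hypothetical stand-in for memblock_phys_alloc_range(): return an
   * aligned address in [start, end), or 0 on failure. Assumes align
   * is a power of two.
   */
  static phys_addr_t try_alloc(phys_addr_t size, phys_addr_t align,
                               phys_addr_t start, phys_addr_t end)
  {
          phys_addr_t base = (start + align - 1) & ~(align - 1);

          return (base + size <= end) ? base : 0;
  }

  /* Try each gap between scratch ranges, then the tail above them. */
  static phys_addr_t alloc_outside_scratch(const struct range *scratch,
                                           unsigned int cnt,
                                           phys_addr_t size,
                                           phys_addr_t align,
                                           phys_addr_t base,
                                           phys_addr_t limit)
  {
          phys_addr_t curr = base, start;
          unsigned int i;

          for (i = 0; i < cnt; i++) {
                  if (scratch[i].addr > curr) {
                          start = try_alloc(size, align, curr,
                                            scratch[i].addr);
                          if (start)
                                  return start;
                  }
                  curr = scratch[i].addr + scratch[i].size;
          }
          return try_alloc(size, align, curr, limit);
  }

  int main(void)
  {
          /* Two scratch ranges: 16..20 MiB and 32..34 MiB. */
          const struct range scratch[] = {
                  { 0x1000000, 0x400000 },
                  { 0x2000000, 0x200000 },
          };
          /* 2 MiB allocation, 1 MiB aligned, within 1..64 MiB. */
          phys_addr_t p = alloc_outside_scratch(scratch, 2, 0x200000,
                                                0x100000, 0x100000,
                                                0x4000000);

          printf("allocated at %#llx\n", (unsigned long long)p);
          return 0;
  }

One difference worth noting: this sketch searches bottom-up, while
memblock allocates top-down within the constrained range by default,
so the in-kernel version may choose a different gap. The invariant
that matters is only that the result avoids every scratch interval.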

This is posted as an RFC because I would like feedback on:

  - Whether the allocation-gap scanning approach is acceptable.
  - Whether this logic belongs in reserve_mem() or should be abstracted
    into a helper for reuse.
  - How best to test this change.


Signed-off-by: Swaraj Gaikwad <swarajgaikwad1925@...il.com>
---
 mm/memblock.c | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index e23e16618e9b..7605a0b2b64e 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2684,8 +2684,29 @@ static int __init reserve_mem(char *p)
 	if (reserve_mem_kho_revive(name, size, align))
 		return 1;

-	/* TODO: Allocation must be outside of scratch region */
-	start = memblock_phys_alloc(size, align);
+	phys_addr_t scratch_start, scratch_end;
+	phys_addr_t curr_start_addr = 0;
+	unsigned int i;
+
+	start = 0;
+	/* Try each gap below a scratch region, in address order. */
+	for (i = 0; i < kho_scratch_cnt; i++) {
+		scratch_start = kho_scratch[i].addr;
+		scratch_end = kho_scratch[i].addr + kho_scratch[i].size;
+		if (scratch_start > curr_start_addr) {
+			start = memblock_phys_alloc_range(size, align,
+							  curr_start_addr,
+							  scratch_start);
+			if (start)
+				break;
+		}
+		curr_start_addr = scratch_end;
+	}
+
+	/* Fall back to the range above the last scratch region. */
+	if (!start)
+		start = memblock_phys_alloc_range(size, align, curr_start_addr,
+						  MEMBLOCK_ALLOC_ACCESSIBLE);
 	if (!start)
 		return -ENOMEM;


base-commit: 2178727587e1eaa930b8266377119ed6043067df
--
2.52.0

