Message-Id: <20211215172023.327233538@linuxfoundation.org>
Date: Wed, 15 Dec 2021 18:21:36 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Mike Rapoport <rppt@...ux.ibm.com>,
Tony Lindgren <tony@...mide.com>,
Mark-PK Tsai <mark-pk.tsai@...iatek.com>
Subject: [PATCH 5.4 15/18] memblock: align freed memory map on pageblock boundaries with SPARSEMEM

From: Mike Rapoport <rppt@...ux.ibm.com>

commit f921f53e089a12a192808ac4319f28727b35dc0f upstream.

When CONFIG_SPARSEMEM=y, the ranges of the memory map that are freed are
not aligned to the pageblock boundaries, which breaks assumptions about
the homogeneity of the memory map throughout core mm code.

Make sure that the freed memory map is always aligned on pageblock
boundaries regardless of the memory model selection.
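
For reference, the pageblock rounding this relies on is plain
power-of-two mask arithmetic. Below is a minimal userspace sketch, not
kernel code: the pageblock_nr_pages value of 512 assumes 4 KiB pages
with 2 MiB pageblocks, and the ALIGN()/round_down() macros are
re-derived here for illustration:

#include <stdio.h>

/* Userspace re-derivations of the kernel helpers; valid for
 * power-of-two alignments only. */
#define ALIGN(x, a)      (((x) + (a) - 1) & ~((a) - 1)) /* round up   */
#define round_down(x, y) ((x) & ~((y) - 1))             /* round down */

int main(void)
{
	unsigned long pageblock_nr_pages = 512; /* assumed: 2 MiB / 4 KiB */
	unsigned long start = 1000, prev_end = 1000;

	/* A hole's start is rounded DOWN so a freed range never
	 * begins in the middle of a pageblock... */
	printf("round_down(%lu, %lu) = %lu\n", start, pageblock_nr_pages,
	       round_down(start, pageblock_nr_pages)); /* -> 512 */

	/* ...and the tail's start is rounded UP so the memory map of
	 * a partially populated pageblock is kept intact. */
	printf("ALIGN(%lu, %lu) = %lu\n", prev_end, pageblock_nr_pages,
	       ALIGN(prev_end, pageblock_nr_pages));   /* -> 1024 */
	return 0;
}
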
Signed-off-by: Mike Rapoport <rppt@...ux.ibm.com>
Tested-by: Tony Lindgren <tony@...mide.com>
Link: https://lore.kernel.org/lkml/20210630071211.21011-1-rppt@kernel.org/
[backport upstream modification in mm/memblock.c to arch/arm/mm/init.c]
Signed-off-by: Mark-PK Tsai <mark-pk.tsai@...iatek.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/arm/mm/init.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -371,14 +371,14 @@ static void __init free_unused_memmap(vo
*/
start = min(start,
ALIGN(prev_end, PAGES_PER_SECTION));
-#else
+#endif
/*
* Align down here since many operations in VM subsystem
* presume that there are no holes in the memory map inside
* a pageblock
*/
start = round_down(start, pageblock_nr_pages);
-#endif
+
/*
* If we had a previous bank, and there is a space
* between the current bank and the previous, free it.
@@ -396,9 +396,11 @@ static void __init free_unused_memmap(vo
}
#ifdef CONFIG_SPARSEMEM
- if (!IS_ALIGNED(prev_end, PAGES_PER_SECTION))
+ if (!IS_ALIGNED(prev_end, PAGES_PER_SECTION)) {
+ prev_end = ALIGN(prev_end, pageblock_nr_pages);
free_memmap(prev_end,
ALIGN(prev_end, PAGES_PER_SECTION));
+ }
#endif
}
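
To make the behavioral change of the second hunk concrete, here is a
hypothetical standalone simulation of the SPARSEMEM tail case. The
constants and pfn values are illustrative only (they assume 4 KiB
pages, 2 MiB pageblocks and 256 MiB sections), and free_memmap() is
replaced by a stub that just reports the range it would free:

#include <stdio.h>
#include <stdbool.h>

#define ALIGN(x, a)      (((x) + (a) - 1) & ~((a) - 1))
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

static const unsigned long pageblock_nr_pages = 512;     /* 2 MiB   */
static const unsigned long PAGES_PER_SECTION  = 0x10000; /* 256 MiB */

/* Stand-in for the kernel's free_memmap(): just report the range. */
static void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
{
	printf("  free memmap for pfns [%#lx, %#lx)\n", start_pfn, end_pfn);
}

static void tail_free(unsigned long prev_end, bool patched)
{
	if (!IS_ALIGNED(prev_end, PAGES_PER_SECTION)) {
		if (patched) /* the line this patch adds */
			prev_end = ALIGN(prev_end, pageblock_nr_pages);
		free_memmap(prev_end, ALIGN(prev_end, PAGES_PER_SECTION));
	}
}

int main(void)
{
	/* A bank that ends mid-pageblock and mid-section. */
	unsigned long prev_end = 0x8100; /* not a multiple of 512 */

	printf("before the patch:\n");
	tail_free(prev_end, false); /* frees from 0x8100, mid-pageblock   */

	printf("after the patch:\n");
	tail_free(prev_end, true);  /* frees from 0x8200, a pageblock
				       boundary */
	return 0;
}

With the fix, the memory map for pfns 0x8100-0x81ff is deliberately
left in place, so the pageblock containing the end of the bank keeps a
fully populated memory map; this matches what the first hunk already
guarantees for the start of a hole.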