Message-Id: <20210714123739.16493-2-rppt@kernel.org>
Date: Wed, 14 Jul 2021 15:37:36 +0300
From: Mike Rapoport <rppt@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Michal Simek <monstr@...str.eu>, Mike Rapoport <rppt@...nel.org>,
Mike Rapoport <rppt@...ux.ibm.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: [PATCH 1/4] mm/page_alloc: always initialize memory map for the holes
From: Mike Rapoport <rppt@...ux.ibm.com>
Currently, the memory map for holes is initialized only when the SPARSEMEM
memory model is used. Yet even with FLATMEM there can be holes in the
physical memory layout that have memory map entries.
For instance, memory reserved using the e820 API on i386 or via
"reserved-memory" nodes in the device tree does not appear in
memblock.memory, and hence the struct pages for such holes are skipped
during memory map initialization.
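As an illustration, consider a hypothetical "reserved-memory" node like
the sketch below (the node name and addresses are made up for the
example). With "no-map" set, the range is kept out of the memory ranges
that the regular memory map initialization walks, so its struct pages
are never passed through __init_single_page():

	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* hypothetical carve-out; "no-map" excludes the range
		 * from the ranges walked during memmap initialization */
		fb_mem: framebuffer@9f000000 {
			reg = <0x9f000000 0x01000000>;
			no-map;
		};
	};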
These struct pages will be zeroed because the memory map for FLATMEM
systems is allocated with memblock_alloc_node(), which clears the
allocated memory. While zeroed struct pages do not cause immediate
problems, the correct behaviour is to initialize every page using
__init_single_page().
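For reference, proper initialization of the pages in a hole amounts to
roughly the following sketch, modelled on init_unavailable_range() (the
pfn_valid() checks and page accounting are omitted here):

	unsigned long pfn;

	for (pfn = spfn; pfn < epfn; pfn++) {
		struct page *page = pfn_to_page(pfn);

		/* Set up page->flags, zone/node links, refcount etc.
		 * instead of leaving the whole struct zeroed. */
		__init_single_page(page, pfn, zone, node);

		/* Holes are not usable memory, keep them reserved. */
		__SetPageReserved(page);
	}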
Besides, enabling page poisoning for the FLATMEM case will trigger
PF_POISONED_CHECK() unless the memory map is properly initialized.
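For context, the poison check in include/linux/page-flags.h looks
roughly like this (condensed here, not verbatim):

	#define PAGE_POISON_PATTERN	-1l

	static inline int PagePoisoned(const struct page *page)
	{
		/* An all-ones flags word means the struct page still
		 * holds the poison pattern, i.e. was never initialized */
		return page->flags == PAGE_POISON_PATTERN;
	}

	#define PF_POISONED_CHECK(page) ({				\
		VM_BUG_ON_PGFLAGS(PagePoisoned(page), page);		\
		page; })

Any page flag test on a struct page that still holds the poison pattern
will thus hit VM_BUG_ON_PGFLAGS().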
Make sure init_unavailable_range() is called for both SPARSEMEM and
FLATMEM, so that struct pages representing memory holes are marked
PG_Reserved with any memory layout.
Signed-off-by: Mike Rapoport <rppt@...ux.ibm.com>
---
mm/page_alloc.c | 8 --------
1 file changed, 8 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3b97e17806be..878d7af4403d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6624,7 +6624,6 @@ static void __meminit zone_init_free_lists(struct zone *zone)
}
}
-#if !defined(CONFIG_FLATMEM)
/*
* Only struct pages that correspond to ranges defined by memblock.memory
* are zeroed and initialized by going through __init_single_page() during
@@ -6669,13 +6668,6 @@ static void __init init_unavailable_range(unsigned long spfn,
pr_info("On node %d, zone %s: %lld pages in unavailable ranges",
node, zone_names[zone], pgcnt);
}
-#else
-static inline void init_unavailable_range(unsigned long spfn,
- unsigned long epfn,
- int zone, int node)
-{
-}
-#endif
static void __init memmap_init_zone_range(struct zone *zone,
unsigned long start_pfn,
--
2.28.0