Message-ID: <175e0e1e-90cd-5c69-69a3-9f44462679e3@redhat.com>
Date: Wed, 25 Aug 2021 14:11:23 +0200
From: David Hildenbrand <david@...hat.com>
To: Mike Rapoport <rppt@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Michal Simek <monstr@...str.eu>,
Mike Rapoport <rppt@...ux.ibm.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 1/4] mm/page_alloc: always initialize memory map for the
holes
On 14.07.21 14:37, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@...ux.ibm.com>
>
> Currently, the memory map for holes is initialized only when the SPARSEMEM
> memory model is used. Yet, even with FLATMEM there can be holes in the
> physical memory layout that have memory map entries.
>
> For instance, memory reserved via the e820 API on i386 or via
> "reserved-memory" nodes in the device tree does not appear in
> memblock.memory, and hence the struct pages for such holes are skipped
> during memory map initialization.
>
> These struct pages are zeroed because the memory map for FLATMEM systems
> is allocated with memblock_alloc_node(), which clears the allocated
> memory. While zeroed struct pages do not cause immediate problems, the
> correct behaviour is to initialize every page using __init_single_page().
> Besides, enabling page poisoning for the FLATMEM case will trigger
> PF_POISONED_CHECK() unless the memory map is properly initialized.
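
(Side note on the poisoning part, if I read it right: when struct page
poisoning is in effect the memmap is filled with an all-ones pattern
rather than zeroes, so a page that never goes through
__init_single_page() keeps that pattern and the first page-flag access
trips the check. Roughly what the kernel headers define, paraphrased
rather than verbatim:

#define PAGE_POISON_PATTERN	-1l

static inline int PagePoisoned(const struct page *page)
{
	/* flags still hold the all-ones poison pattern */
	return page->flags == PAGE_POISON_PATTERN;
}

#define PF_POISONED_CHECK(page) ({				\
		VM_BUG_ON_PGFLAGS(PagePoisoned(page), page);	\
		page; })

i.e. an uninitialized hole page blows up on its first pageflag test.)
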
>
> Make sure init_unavailable_range() is called for both SPARSEMEM and FLATMEM
> so that struct pages representing memory holes are marked PG_reserved
> regardless of the memory layout.
>
> Signed-off-by: Mike Rapoport <rppt@...ux.ibm.com>
> ---
> mm/page_alloc.c | 8 --------
> 1 file changed, 8 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3b97e17806be..878d7af4403d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6624,7 +6624,6 @@ static void __meminit zone_init_free_lists(struct zone *zone)
> }
> }
>
> -#if !defined(CONFIG_FLATMEM)
> /*
> * Only struct pages that correspond to ranges defined by memblock.memory
> * are zeroed and initialized by going through __init_single_page() during
> @@ -6669,13 +6668,6 @@ static void __init init_unavailable_range(unsigned long spfn,
> pr_info("On node %d, zone %s: %lld pages in unavailable ranges",
> node, zone_names[zone], pgcnt);
> }
> -#else
> -static inline void init_unavailable_range(unsigned long spfn,
> - unsigned long epfn,
> - int zone, int node)
> -{
> -}
> -#endif
>
> static void __init memmap_init_zone_range(struct zone *zone,
> unsigned long start_pfn,
>
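Looks good to me. For anyone following along: with the #ifdef gone,
FLATMEM takes the same path as SPARSEMEM, which for each hole boils down
to roughly the following per-pfn initialization (a simplified sketch of
what init_unavailable_range() does in this tree, not new code):

	for (pfn = spfn; pfn < epfn; pfn++) {
		/* set node/zone links, refcount, LRU list head, ... */
		__init_single_page(pfn_to_page(pfn), pfn, zone, node);
		/* mark the hole so it is never handed out as usable RAM */
		__SetPageReserved(pfn_to_page(pfn));
	}
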
Acked-by: David Hildenbrand <david@...hat.com>
--
Thanks,
David / dhildenb