Message-ID: <34d34fac-9a8b-4496-a6db-725c40d0408b@redhat.com>
Date: Mon, 4 Aug 2025 17:24:36 +0200
From: David Hildenbrand <david@...hat.com>
To: Sumanth Korikkar <sumanthk@...ux.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Cc: Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
Heiko Carstens <hca@...ux.ibm.com>, Vasily Gorbik <gor@...ux.ibm.com>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
linux-s390 <linux-s390@...r.kernel.org>
Subject: Re: [PATCH v2] mm: fix accounting of memmap pages for early sections
On 04.08.25 17:13, Sumanth Korikkar wrote:
> memmap pages can be allocated either from the memblock (boot) allocator
> during early boot or from the buddy allocator.
>
> When these memmap pages are removed via arch_remove_memory(), the
> deallocation path depends on their source:
>
> * For pages from the buddy allocator, depopulate_section_memmap() is
> called, which should decrement the count of nr_memmap_pages.
>
> * For pages from the boot allocator, free_map_bootmem() is called, which
> should decrement the count of nr_memmap_boot_pages.
>
> Ensure correct tracking of memmap pages for both early and non-early
> sections by adjusting the accounting in section_deactivate().
>
> Cc: stable@...r.kernel.org
> Fixes: 15995a352474 ("mm: report per-page metadata information")
> Suggested-by: David Hildenbrand <david@...hat.com>
> Signed-off-by: Sumanth Korikkar <sumanthk@...ux.ibm.com>
> ---
> v2: consider accounting for !CONFIG_SPARSEMEM_VMEMMAP.
>
> mm/sparse.c | 9 ++++++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 3c012cf83cc2..b9cc9e548f80 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -680,7 +680,6 @@ static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
> unsigned long start = (unsigned long) pfn_to_page(pfn);
> unsigned long end = start + nr_pages * sizeof(struct page);
>
> - memmap_pages_add(-1L * (DIV_ROUND_UP(end - start, PAGE_SIZE)));
> vmemmap_free(start, end, altmap);
> }
>
> static void free_map_bootmem(struct page *memmap)
> @@ -856,10 +855,14 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
> * The memmap of early sections is always fully populated. See
> * section_activate() and pfn_valid() .
> */
> - if (!section_is_early)
> + if (!section_is_early) {
> + memmap_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE)));
> depopulate_section_memmap(pfn, nr_pages, altmap);
> - else if (memmap)
> + } else if (memmap) {
> + memmap_boot_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page),
> + PAGE_SIZE)));
> free_map_bootmem(memmap);
> + }
>
> if (empty)
> ms->section_mem_map = (unsigned long)NULL;
Acked-by: David Hildenbrand <david@...hat.com>
Hopefully we're not missing anything important.
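[Editor's note: the delta the patch computes, DIV_ROUND_UP(nr_pages *
sizeof(struct page), PAGE_SIZE), can be sanity-checked with a short
sketch. The 64-byte struct page and 4 KiB page size below are assumed
typical defaults, not values taken from this thread.]

```python
# Sketch of the per-section accounting delta applied in
# section_deactivate(), mirroring the patch's
# DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE).
# Assumes sizeof(struct page) == 64 and PAGE_SIZE == 4096.
STRUCT_PAGE_SIZE = 64
PAGE_SIZE = 4096

def div_round_up(n, d):
    # Integer ceiling division, as the kernel's DIV_ROUND_UP macro.
    return (n + d - 1) // d

def memmap_pages_delta(nr_pages):
    # Number of memmap pages to subtract when the memmap covering
    # nr_pages base pages is freed.
    return div_round_up(nr_pages * STRUCT_PAGE_SIZE, PAGE_SIZE)

# A 128 MiB section spans 32768 base pages of 4 KiB each;
# its memmap occupies 32768 * 64 bytes = 2 MiB = 512 pages.
print(memmap_pages_delta(32768))  # 512
```

With these assumed sizes, deactivating one 128 MiB section decrements
the relevant counter (nr_memmap_pages or nr_memmap_boot_pages,
depending on the allocation source) by 512 pages.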
--
Cheers,
David / dhildenb