Open Source and information security mailing list archives
Message-ID: <1e259390-67b1-4d08-8174-a65f1fc9eccc@redhat.com>
Date: Mon, 4 Aug 2025 14:27:20 +0200
From: David Hildenbrand <david@...hat.com>
To: Sumanth Korikkar <sumanthk@...ux.ibm.com>,
 Andrew Morton <akpm@...ux-foundation.org>, linux-mm <linux-mm@...ck.org>,
 LKML <linux-kernel@...r.kernel.org>
Cc: Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
 Heiko Carstens <hca@...ux.ibm.com>, Vasily Gorbik <gor@...ux.ibm.com>,
 Alexander Gordeev <agordeev@...ux.ibm.com>,
 linux-s390 <linux-s390@...r.kernel.org>
Subject: Re: [PATCH] mm: fix accounting of memmap pages for early sections

On 04.08.25 11:08, Sumanth Korikkar wrote:
> memmap pages can be allocated either from the memblock (boot) allocator
> during early boot or from the buddy allocator.
> 
> When these memmap pages are removed via arch_remove_memory(), the
> deallocation path depends on their source:
> 
> * For pages from the buddy allocator, depopulate_section_memmap() is
>    called, which also decrements the count of nr_memmap_pages.
> 
> * For pages from the boot allocator, free_map_bootmem() is called. But
>    it currently does not adjust the nr_memmap_boot_pages count.
> 
> To fix this inconsistency, update free_map_bootmem() to also decrement
> the nr_memmap_boot_pages count by invoking memmap_boot_pages_add(),
> mirroring how free_vmemmap_page() handles this for boot-allocated pages.
> 
> This ensures correct tracking of memmap pages regardless of allocation
> source.
> 
> Cc: stable@...r.kernel.org
> Fixes: 15995a352474 ("mm: report per-page metadata information")
> Signed-off-by: Sumanth Korikkar <sumanthk@...ux.ibm.com>
> ---
>   mm/sparse.c | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 3c012cf83cc2..d7c128015397 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -688,6 +688,7 @@ static void free_map_bootmem(struct page *memmap)
>   	unsigned long start = (unsigned long)memmap;
>   	unsigned long end = (unsigned long)(memmap + PAGES_PER_SECTION);
>   
> +	memmap_boot_pages_add(-1L * (DIV_ROUND_UP(end - start, PAGE_SIZE)));
>   	vmemmap_free(start, end, NULL);
>   }
>   

Looks good to me. But now I wonder about !CONFIG_SPARSEMEM_VMEMMAP, 
where neither depopulate_section_memmap() nor free_map_bootmem() adjusts 
anything?

Which makes me wonder whether we should be moving that to 
section_deactivate().

-- 
Cheers,

David / dhildenb

