Message-ID: <aba22290-3577-44fa-97b3-71abd3429de7@redhat.com>
Date: Wed, 17 Sep 2025 15:29:51 +0200
From: David Hildenbrand <david@...hat.com>
To: Alexander Potapenko <glider@...gle.com>
Cc: akpm@...ux-foundation.org, vbabka@...e.cz, rppt@...nel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org, elver@...gle.com,
dvyukov@...gle.com, kasan-dev@...glegroups.com,
Aleksandr Nogikh <nogikh@...gle.com>
Subject: Re: [PATCH v1] mm/memblock: Correct totalram_pages accounting with
KMSAN
On 17.09.25 14:32, Alexander Potapenko wrote:
> When KMSAN is enabled, `kmsan_memblock_free_pages()` can hold back pages
> for metadata instead of returning them to the early allocator. The callers,
> however, would unconditionally increment `totalram_pages`, assuming the
> pages were always freed. This resulted in an incorrect calculation of the
> total available RAM, causing the kernel to believe it had more memory than
> it actually did.
>
> This patch refactors `memblock_free_pages()` to return the number of pages
> it successfully frees. If KMSAN stashes the pages, the function now
> returns 0; otherwise, it returns the number of pages in the block.
>
> The callers in `memblock.c` have been updated to use this return value,
> ensuring that `totalram_pages` is incremented only by the number of pages
> actually returned to the allocator. This corrects the total RAM accounting
> when KMSAN is active.
>
> Cc: Aleksandr Nogikh <nogikh@...gle.com>
> Fixes: 3c2065098260 ("init: kmsan: call KMSAN initialization routines")
> Signed-off-by: Alexander Potapenko <glider@...gle.com>
> ---
>  mm/internal.h |  4 ++--
>  mm/memblock.c | 18 +++++++++---------
>  mm/mm_init.c  |  9 +++++----
>  3 files changed, 16 insertions(+), 15 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index 45b725c3dc030..ae1ee6e02eff9 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -742,8 +742,8 @@ static inline void clear_zone_contiguous(struct zone *zone)
>  extern int __isolate_free_page(struct page *page, unsigned int order);
>  extern void __putback_isolated_page(struct page *page, unsigned int order,
>  			int mt);
> -extern void memblock_free_pages(struct page *page, unsigned long pfn,
> -		unsigned int order);
> +extern unsigned long memblock_free_pages(struct page *page, unsigned long pfn,
> +		unsigned int order);
>  extern void __free_pages_core(struct page *page, unsigned int order,
>  		enum meminit_context context);
>
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 117d963e677c9..de7ff644d8f4f 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -1834,10 +1834,9 @@ void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
>  	cursor = PFN_UP(base);
>  	end = PFN_DOWN(base + size);
>  
> -	for (; cursor < end; cursor++) {
> -		memblock_free_pages(pfn_to_page(cursor), cursor, 0);
> -		totalram_pages_inc();
> -	}
> +	for (; cursor < end; cursor++)
> +		totalram_pages_add(
> +			memblock_free_pages(pfn_to_page(cursor), cursor, 0));
>  }
That part is clear. But for readability we should probably just do:

	if (memblock_free_pages(pfn_to_page(cursor), cursor, 0))
		totalram_pages_inc();

Or use a temp variable as an alternative.
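Something like this (completely untested, just to illustrate the
temp-variable variant):

	for (; cursor < end; cursor++) {
		unsigned long freed;

		freed = memblock_free_pages(pfn_to_page(cursor), cursor, 0);
		totalram_pages_add(freed);
	}

Either way the split statement with the call nested inside
totalram_pages_add() goes away.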
LGTM
Reviewed-by: David Hildenbrand <david@...hat.com>
--
Cheers
David / dhildenb