Message-ID: <aKcUbInGFUiwCgrw@li-2b55cdcc-350b-11b2-a85c-a78bff51fc11.ibm.com>
Date: Thu, 21 Aug 2025 14:43:24 +0200
From: Sumanth Korikkar <sumanthk@...ux.ibm.com>
To: David Hildenbrand <david@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>, richard.weiyang@...il.com,
Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
Heiko Carstens <hca@...ux.ibm.com>, Vasily Gorbik <gor@...ux.ibm.com>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
linux-s390 <linux-s390@...r.kernel.org>
Subject: Re: [PATCH v4] mm: fix accounting of memmap pages
> > diff --git a/mm/sparse.c b/mm/sparse.c
> > index 066cbf82acb8..24323122f6cb 100644
> > --- a/mm/sparse.c
> > +++ b/mm/sparse.c
> > @@ -454,9 +454,6 @@ static void __init sparse_buffer_init(unsigned long size, int nid)
> > */
> > sparsemap_buf = memmap_alloc(size, section_map_size(), addr, nid, true);
> > sparsemap_buf_end = sparsemap_buf + size;
> > -#ifndef CONFIG_SPARSEMEM_VMEMMAP
> > - memmap_boot_pages_add(DIV_ROUND_UP(size, PAGE_SIZE));
> > -#endif
> > }
> > static void __init sparse_buffer_fini(void)
> > @@ -567,6 +564,8 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
> > sparse_buffer_fini();
> > goto failed;
> > }
> > + memmap_boot_pages_add(DIV_ROUND_UP(PAGES_PER_SECTION * sizeof(struct page),
> > + PAGE_SIZE));
>
> IIRC, we can have partially populated boot sections, where only some
> subsections actually have a memmap ... so this calculation is possibly wrong
> in some cases.
In section_activate():
	/*
	 * The early init code does not consider partially populated initial
	 * sections, it simply assumes that memory will never be referenced. If
	 * we hot-add memory into such a section then we do not need to populate
	 * the memmap and can simply reuse what is already there.
	 */
	if (nr_pages < PAGES_PER_SECTION && early_section(ms))
		return pfn_to_page(pfn);
The patch skips the accounting here, in line with the comment above: for
partially populated initial sections the early memmap is simply reused,
so there is no new allocation to account for.
	memmap = populate_section_memmap(pfn, nr_pages, nid, altmap, pgmap);
	if (!memmap) {
		section_deactivate(pfn, nr_pages, altmap);
		return ERR_PTR(-ENOMEM);
	}
	memmap_pages_add(DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE));
Only bookkeeping for the newly allocated memmap is performed here.
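
For reference, the two accounting helpers used above are thin wrappers
around global counters. A minimal sketch of what they are expected to
look like (counter names are from my reading of mm/vmstat.c, so treat
this as illustrative rather than the exact implementation):

	/* global counters, as assumed in mm/vmstat.c */
	static atomic_long_t nr_memmap_pages;		/* runtime-allocated memmap */
	static atomic_long_t nr_memmap_boot_pages;	/* memblock-allocated memmap */

	void memmap_pages_add(long delta)
	{
		/* memmap allocated at runtime (e.g. memory hotplug path) */
		atomic_long_add(delta, &nr_memmap_pages);
	}

	void memmap_boot_pages_add(long delta)
	{
		/* memmap allocated early from memblock (boot path) */
		atomic_long_add(delta, &nr_memmap_boot_pages);
	}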
Also, before this patch, __populate_section_memmap() performed the memmap
accounting only in the !NULL (successful allocation) case. This patch makes
a similar change, but covers memmap accounting for both the
CONFIG_SPARSEMEM_VMEMMAP and !CONFIG_SPARSEMEM_VMEMMAP configurations, and
performs the accounting based on the allocation source (boot-time memblock
allocations vs. runtime allocations).
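
To make the "allocation source" point concrete, here is a hypothetical
helper (account_memmap() is my own name for illustration, it is not part
of the patch) showing the intended split:

	/*
	 * Hypothetical illustration, not kernel code: memmap pages that come
	 * from the memblock-backed sparse buffer at boot are counted as boot
	 * memmap, pages allocated at hotplug time as runtime memmap.
	 */
	static void account_memmap(unsigned long nr_pages, bool boot_allocated)
	{
		long pages = DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE);

		if (boot_allocated)
			memmap_boot_pages_add(pages);
		else
			memmap_pages_add(pages);
	}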
Let me know if this sounds right.
Thank you.