Message-ID: <20240318142138.783350-1-bhe@redhat.com>
Date: Mon, 18 Mar 2024 22:21:32 +0800
From: Baoquan He <bhe@...hat.com>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org,
x86@...nel.org,
linuxppc-dev@...ts.ozlabs.org,
akpm@...ux-foundation.org,
rppt@...nel.org,
Baoquan He <bhe@...hat.com>
Subject: [PATCH 0/6] mm/mm_init.c: refactor free_area_init_core()
In function free_area_init_core(), the code that calculates
zone->managed_pages and subtracts dma_reserve from the DMA zone looks
very confusing.
From the git history, the code calculating zone->managed_pages was
originally written for zone->present_pages. The early rough assignment
was meant to optimize the zone's pcp and watermark setup. Later,
managed_pages was introduced into struct zone to represent the number
of pages managed by the buddy allocator. Nowadays zone->managed_pages
is zeroed out and reset in mem_init() when memblock_free_all() is
called, and the zone's pcp and watermark setup, which relies on the
actual zone->managed_pages, happens after the mem_init() invocation.
So there is no need to rush to calculate and set zone->managed_pages
early; just set it to zone->present_pages and adjust it in mem_init().
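To illustrate the idea, the per-zone loop in free_area_init_core()
can then shrink to roughly the following (a rough sketch only, not
necessarily the exact diff in patch 4):

	for (j = 0; j < MAX_NR_ZONES; j++) {
		struct zone *zone = pgdat->node_zones + j;
		unsigned long size = zone->spanned_pages;

		/*
		 * Seed managed_pages with present_pages; memblock_free_all()
		 * resets it to the real value when free pages are released
		 * to the buddy allocator.
		 */
		zone_init_internals(zone, j, nid, zone->present_pages);

		if (!size)
			continue;

		setup_usemap(zone);
		init_currently_empty_zone(zone, zone->zone_start_pfn, size);
	}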
Also add a new function calc_nr_kernel_pages() to count the free but
not reserved pages in memblock, then assign the result to nr_all_pages
and nr_kernel_pages after memmap pages are allocated.
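For reference, a rough sketch of what such a helper could look like in
mm/mm_init.c, assuming the usual memblock iterator and the existing
arch_zone_lowest_possible_pfn[] array (the actual patch may differ in
details):

static void __init calc_nr_kernel_pages(void)
{
	unsigned long start_pfn, end_pfn;
	phys_addr_t start_addr, end_addr;
	u64 u;
#ifdef CONFIG_HIGHMEM
	unsigned long high_zone_low = arch_zone_lowest_possible_pfn[ZONE_HIGHMEM];
#endif

	/* Walk free (i.e. not reserved) memblock ranges and count pages. */
	for_each_free_mem_range(u, NUMA_NO_NODE, MEMBLOCK_NONE,
				&start_addr, &end_addr, NULL) {
		start_pfn = PFN_UP(start_addr);
		end_pfn   = PFN_DOWN(end_addr);

		if (start_pfn >= end_pfn)
			continue;

		nr_all_pages += end_pfn - start_pfn;
#ifdef CONFIG_HIGHMEM
		/* Only pages below the highmem boundary are kernel pages. */
		start_pfn = clamp(start_pfn, 0UL, high_zone_low);
		end_pfn = clamp(end_pfn, 0UL, high_zone_low);
#endif
		nr_kernel_pages += end_pfn - start_pfn;
	}
}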
Baoquan He (6):
mm/mm_init.c: remove the useless dma_reserve
x86: remove unneeded memblock_find_dma_reserve()
mm/mm_init.c: add new function calc_nr_all_pages()
mm/mm_init.c: remove meaningless calculation of zone->managed_pages in
free_area_init_core()
mm/mm_init.c: remove unneeded calc_memmap_size()
mm/mm_init.c: remove arch_reserved_kernel_pages()
arch/powerpc/include/asm/mmu.h | 4 --
arch/powerpc/kernel/fadump.c | 5 --
arch/x86/include/asm/pgtable.h | 1 -
arch/x86/kernel/setup.c | 2 -
arch/x86/mm/init.c | 47 -------------
include/linux/mm.h | 4 --
mm/mm_init.c | 117 +++++++++------------------------
7 files changed, 30 insertions(+), 150 deletions(-)
--
2.41.0