Message-ID: <ZxStKvw6HwminDub@kernel.org>
Date: Sun, 20 Oct 2024 10:11:38 +0300
From: Mike Rapoport <rppt@...nel.org>
To: suhua <suhua.tanke@...il.com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, suhua <suhua1@...gsoft.com>
Subject: Re: [PATCH] memblock: Uniform initialization all reserved pages to
MIGRATE_MOVABLE
On Thu, Oct 17, 2024 at 02:44:49PM +0800, suhua wrote:
> Subject: memblock: Uniform initialization all reserved pages to MIGRATE_MOVABLE
I'd suggest:
memblock: uniformly initialize all reserved pages to MIGRATE_MOVABLE
> Currently when CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set, the reserved
> pages are initialized to MIGRATE_MOVABLE by default in memmap_init.
>
> Reserved memory mainly stores the metadata of struct page. When
> HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON=Y and hugepages are allocated,
> the memory occupied by the struct page metadata will be freed.
The struct page metadata is not freed with HVO; what gets freed are the
pages used for the vmemmap that backs it.
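As a rough back-of-the-envelope check (assuming 2MiB hugepages on x86-64
with 4KiB base pages and a 64-byte struct page): each hugepage needs
512 * 64B = 32KiB of vmemmap, i.e. 8 base pages, of which HVO frees 7.
For the 500000 hugepages allocated below that is about
500000 * 7 * 4KiB ≈ 13.4GiB handed back to the page allocator, and it is
the migratetype of exactly that memory this patch is about.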
> Before this patch:
> when CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set, the freed memory was
> placed on the Movable list;
> When CONFIG_DEFERRED_STRUCT_PAGE_INIT=Y, the freed memory was placed on
> the Unmovable list.
>
> After this patch, the freed memory is placed on the Movable list
> regardless of whether CONFIG_DEFERRED_STRUCT_PAGE_INIT is set.
>
> Eg:
Please add back the description of the hardware used for this test and
how many huge pages were allocated at boot.
> echo 500000 > /proc/sys/vm/nr_hugepages
> cat /proc/pagetypeinfo
>
> before:
> Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
> …
> Node    0, zone   Normal, type    Unmovable     51      2      1     28     53     35     35     43     40     69   3852
> Node    0, zone   Normal, type      Movable   6485   4610    666    202    200    185    208     87     54      2    240
> Node    0, zone   Normal, type  Reclaimable      2      2      1     23     13      1      2      1      0      1      0
> Node    0, zone   Normal, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
> Node    0, zone   Normal, type      Isolate      0      0      0      0      0      0      0      0      0      0      0
> Unmovable ≈ 15GB
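(A quick sanity check on that estimate: with 4KiB base pages an order-10
block is 4MiB, so the order-10 column alone is 3852 * 4MiB ≈ 15GiB, which
is where essentially all of the Unmovable total comes from.)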
>
> after:
> Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
> …
> Node    0, zone   Normal, type    Unmovable      0      1      1      0      0      0      0      1      1      1      0
> Node    0, zone   Normal, type      Movable   1563   4107   1119    189    256    368    286    132    109      4   3841
> Node    0, zone   Normal, type  Reclaimable      2      2      1     23     13      1      2      1      0      1      0
> Node    0, zone   Normal, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
> Node    0, zone   Normal, type      Isolate      0      0      0      0      0      0      0      0      0      0      0
>
> Signed-off-by: suhua <suhua1@...gsoft.com>
checkpatch.pl gives this warning:
WARNING: From:/Signed-off-by: email address mismatch: 'From: suhua <suhua.tanke@...il.com>' != 'Signed-off-by: suhua <suhua1@...gsoft.com>'
Please update either the commit authorship or the Signed-off-by so that the
two addresses match.
Also, Signed-off-by should use a known identity, i.e. a real name in the
form "Firstname Lastname".
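(One common way to make them consistent, assuming you want
suhua1@...gsoft.com recorded as the author: keep sending from the gmail
address, but add an explicit

	From: suhua <suhua1@...gsoft.com>

line at the very top of the changelog; git am will then pick that address
up as the commit author.)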
> ---
> mm/mm_init.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 4ba5607aaf19..6dbf2df23eee 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -722,6 +722,10 @@ static void __meminit init_reserved_page(unsigned long pfn, int nid)
> if (zone_spans_pfn(zone, pfn))
> break;
> }
> +
> + if (pageblock_aligned(pfn))
> + set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_MOVABLE);
> +
> __init_single_page(pfn_to_page(pfn), pfn, zid, nid);
> }
> #else
> --
> 2.34.1
>
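For context, and quoting roughly from memory, this mirrors what the
non-deferred path already does in memmap_init_range() in mm/mm_init.c:

	if (pageblock_aligned(pfn)) {
		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
		cond_resched();
	}

so with this change the deferred init of reserved pages marks
pageblock-aligned pfns as MIGRATE_MOVABLE in the same way.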
--
Sincerely yours,
Mike.