Message-ID: <Zw-qR3fcOnXHRMf8@kernel.org>
Date: Wed, 16 Oct 2024 14:57:59 +0300
From: Mike Rapoport <rppt@...nel.org>
To: Su Hua <suhua.tanke@...il.com>
Cc: akpm@...ux-foundation.org, muchun.song@...ux.dev, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, suhua <suhua1@...gsoft.com>
Subject: Re: [PATCH v1] memblock: Initialized the memory of memblock.reserve
to the MIGRATE_MOVABL
Hi,
On Sat, Oct 12, 2024 at 11:55:31AM +0800, Su Hua wrote:
> Hi Mike
>
> Thanks for your advice and sorry for taking so long to reply.
Please don't top-post on the Linux kernel mailing lists
> I looked at the logic again. deferred_init_pages currently handles all
> (memory && !reserved) memblock regions and puts that memory into the
> buddy allocator.
> Changing it to also handle reserved memory may involve more code
> changes. I wonder if I can change the commit message to: this patch
> mainly sets the migrate type to MIGRATE_MOVABLE when reserve-type pages
> are initialized, regardless of whether CONFIG_DEFERRED_STRUCT_PAGE_INIT
> is set or not?
>
> When CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set, memblock regions of
> the reserved type are initialized to MIGRATE_MOVABLE by default when
> memmap_init initializes memory.
This should be more clearly emphasized in the commit message.
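The split being discussed above can be sketched as a toy model (plain Python, not kernel code; the pageblock indices, the `patch_applied` flag, and the constant names are illustrative only):

```python
# Toy model of early migratetype assignment; not kernel code.
# One entry per pageblock; names mirror the kernel's MIGRATE_*
# constants purely for readability.
MOVABLE, UNMOVABLE = "MIGRATE_MOVABLE", "MIGRATE_UNMOVABLE"

def init_migratetypes(n_pageblocks, reserved, patch_applied):
    """Type each pageblock the way early init would.

    Without the patch, pageblocks backing memblock.reserved (e.g. the
    struct page metadata) keep the default unmovable type in the
    deferred-init path; with the patch they become movable as well.
    """
    return {
        pb: (MOVABLE if pb not in reserved or patch_applied else UNMOVABLE)
        for pb in range(n_pageblocks)
    }

reserved = {2, 3}                      # pageblocks holding struct pages
before = init_migratetypes(8, reserved, patch_applied=False)
after = init_migratetypes(8, reserved, patch_applied=True)
print(before[2], "->", after[2])       # MIGRATE_UNMOVABLE -> MIGRATE_MOVABLE
```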
> Sincerely yours,
> Su
>
>
> Mike Rapoport <rppt@...nel.org> wrote on Sunday, Sep 29, 2024 at 17:18:
> >
> > On Wed, Sep 25, 2024 at 07:02:35PM +0800, suhua wrote:
> > > After sparse_init function requests memory for struct page in memblock and
> > > adds it to memblock.reserved, this memory area is present in both
> > > memblock.memory and memblock.reserved.
> > >
> > > When CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set, the memmap_init function
> > > is called during the initialization of the zone's free area. This function
> > > calls for_each_mem_pfn_range to initialize all of memblock.memory,
> > > excluding memory that is also placed in memblock.reserved, such as the
> > > struct page metadata that describes pages (for 1TB of memory this is about
> > > 16GB, and generally this part of reserved memory occupies more than 90% of
> > > the total reserved memory of the system). So all memory in memblock.memory
> > > is set to MIGRATE_MOVABLE according to the alignment of pageblock_nr_pages.
> > > For example, if hugetlb_optimize_vmemmap=1 and huge pages are allocated,
> > > the freed pages are placed on buddy's MIGRATE_MOVABL list for use.
> >
> > Please make sure you spell MIGRATE_MOVABLE and MIGRATE_UNMOVABLE correctly.
> >
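The "1TB of memory is about 16GB" figure in the quoted commit message checks out arithmetically (assuming 4KiB base pages and the usual 64-byte struct page on 64-bit kernels):

```python
# Sanity check: struct page metadata needed to describe 1 TiB of memory.
# Assumptions: 4 KiB base pages, 64-byte struct page (typical on 64-bit).
PAGE_SIZE = 4096
STRUCT_PAGE_SIZE = 64

total_mem = 1 << 40                    # 1 TiB
n_pages = total_mem // PAGE_SIZE       # 268,435,456 struct pages
metadata = n_pages * STRUCT_PAGE_SIZE  # bytes of page metadata
print(metadata // (1 << 30), "GiB")    # 16 GiB
```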
> > > When CONFIG_DEFERRED_STRUCT_PAGE_INIT=y, only the first_deferred_pfn range
> > > is initialized in memmap_init. The subsequent free_low_memory_core_early
> > > initializes all memblock.reserved memory but does not set it to
> > > MIGRATE_MOVABL. All memblock.memory is set to MIGRATE_MOVABL when it is
> > > placed in buddy via free_low_memory_core_early and deferred_init_memmap.
> > > As a result, when hugetlb_optimize_vmemmap=1 and huge pages are allocated,
> > > the freed pages will be placed on buddy's MIGRATE_UNMOVABL list (for
> > > example, on machines with 1TB of memory, allocating 1000GB of 2MB huge
> > > pages frees up about 15GB to MIGRATE_UNMOVABL). Since huge page allocation
> > > requires a MIGRATE_MOVABL page, a fallback is performed to allocate memory
> > > from MIGRATE_UNMOVABL for MIGRATE_MOVABL.
> > >
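The "~15GB" estimate above can likewise be reconstructed as a back-of-envelope check (assuming HVO frees 7 of the 8 vmemmap pages backing each 2MB huge page, i.e. 512 struct pages x 64 bytes = 32KiB per huge page; a sketch, not kernel behaviour verbatim):

```python
# Back-of-envelope: vmemmap pages freed by HVO for 1000GB of 2MB pages.
# Assumption: each 2MB huge page is described by 8 vmemmap pages
# (512 struct pages * 64 bytes = 32 KiB) and HVO frees 7 of them.
PAGE_SIZE = 4096
n_hugepages = (1000 << 30) // (2 << 20)   # 512000 huge pages
freed = n_hugepages * 7 * PAGE_SIZE       # bytes returned to buddy
print(round(freed / 1e9, 1), "GB")        # ~14.7 GB, i.e. "about 15GB"
```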
> > > A large amount of UNMOVABL memory is not conducive to defragmentation, so
> > > the reserved memory is also set to MIGRATE_MOVABLE in the
> > > free_low_memory_core_early phase, following the alignment of
> > > pageblock_nr_pages.
> > >
> > > Eg:
> > > echo 500000 > /proc/sys/vm/nr_hugepages
> > > cat /proc/pagetypeinfo
> > >
> > > before:
> > > Free pages count per migrate type at order 0 1 2 3 4 5 6 7 8 9 10
> > > …
> > > Node 0, zone Normal, type Unmovable 51 2 1 28 53 35 35 43 40 69 3852
> > > Node 0, zone Normal, type Movable 6485 4610 666 202 200 185 208 87 54 2 240
> > > Node 0, zone Normal, type Reclaimable 2 2 1 23 13 1 2 1 0 1 0
> > > Node 0, zone Normal, type HighAtomic 0 0 0 0 0 0 0 0 0 0 0
> > > Node 0, zone Normal, type Isolate 0 0 0 0 0 0 0 0 0 0 0
> > > Unmovable ≈ 15GB
> > >
> > > after:
> > > Free pages count per migrate type at order 0 1 2 3 4 5 6 7 8 9 10
> > > …
> > > Node 0, zone Normal, type Unmovable 0 1 1 0 0 0 0 1 1 1 0
> > > Node 0, zone Normal, type Movable 1563 4107 1119 189 256 368 286 132 109 4 3841
> > > Node 0, zone Normal, type Reclaimable 2 2 1 23 13 1 2 1 0 1 0
> > > Node 0, zone Normal, type HighAtomic 0 0 0 0 0 0 0 0 0 0 0
> > > Node 0, zone Normal, type Isolate 0 0 0 0 0 0 0 0 0 0 0
> > >
> > > Signed-off-by: suhua <suhua1@...gsoft.com>
> > > ---
> > > mm/mm_init.c | 6 ++++++
> > > 1 file changed, 6 insertions(+)
> > >
> > > diff --git a/mm/mm_init.c b/mm/mm_init.c
> > > index 4ba5607aaf19..e0190e3f8f26 100644
> > > --- a/mm/mm_init.c
> > > +++ b/mm/mm_init.c
> > > @@ -722,6 +722,12 @@ static void __meminit init_reserved_page(unsigned long pfn, int nid)
> > >  		if (zone_spans_pfn(zone, pfn))
> > >  			break;
> > >  	}
> > > +
> > > +	if (pageblock_aligned(pfn)) {
> > > +		set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_MOVABLE);
> > > +		cond_resched();
No need to call cond_resched() here
> > > +	}
> > > +
> > >  	__init_single_page(pfn_to_page(pfn), pfn, zid, nid);
> > >  }
> > >  #else
> > > --
> > > 2.34.1
> > >
> >
> > --
> > Sincerely yours,
> > Mike.
--
Sincerely yours,
Mike.