Message-ID: <20250819095223.ckjdsii4gc6u4nec@master>
Date: Tue, 19 Aug 2025 09:52:23 +0000
From: Wei Yang <richard.weiyang@...il.com>
To: Mike Rapoport <rppt@...nel.org>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Bill Wendling <morbo@...gle.com>,
Daniel Jordan <daniel.m.jordan@...cle.com>,
Justin Stitt <justinstitt@...gle.com>,
Michael Ellerman <mpe@...erman.id.au>,
Miguel Ojeda <ojeda@...nel.org>,
Nathan Chancellor <nathan@...nel.org>,
Nick Desaulniers <nick.desaulniers+lkml@...il.com>,
linux-kernel@...r.kernel.org, llvm@...ts.linux.dev
Subject: Re: [PATCH 1/4] mm/mm_init: use deferred_init_memmap_chunk() in
deferred_grow_zone()
Hi, Mike
After going through the code again, I have a few minor thoughts to discuss
with you. If I have something wrong, please let me know.
On Mon, Aug 18, 2025 at 09:46:12AM +0300, Mike Rapoport wrote:
[...]
> bool __init deferred_grow_zone(struct zone *zone, unsigned int order)
> {
>- unsigned long nr_pages_needed = ALIGN(1 << order, PAGES_PER_SECTION);
>+ unsigned long nr_pages_needed = SECTION_ALIGN_UP(1 << order);
> pg_data_t *pgdat = zone->zone_pgdat;
> unsigned long first_deferred_pfn = pgdat->first_deferred_pfn;
> unsigned long spfn, epfn, flags;
> unsigned long nr_pages = 0;
>- u64 i = 0;
>
> /* Only the last zone may have deferred pages */
> if (zone_end_pfn(zone) != pgdat_end_pfn(pgdat))
>@@ -2262,37 +2272,26 @@ bool __init deferred_grow_zone(struct zone *zone, unsigned int order)
> return true;
> }
In the code above this line, first_deferred_pfn is compared against its
original value after grabbing pgdat_resize_lock.
I am thinking of also comparing first_deferred_pfn with ULONG_MAX, as
deferred_init_memmap() does. That value indicates the zone has already been
fully initialized.
The current code guards against this case with spfn < zone_end_pfn(zone).
Maybe an explicit check up front would be clearer?
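Something like this, maybe (untested, just to illustrate; it would sit after
the existing recheck of pgdat->first_deferred_pfn under pgdat_resize_lock):

	/* The zone's memmap is already fully initialized, nothing to grow */
	if (pgdat->first_deferred_pfn == ULONG_MAX) {
		pgdat_resize_unlock(pgdat, &flags);
		return false;
	}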
>
>- /* If the zone is empty somebody else may have cleared out the zone */
>- if (!deferred_init_mem_pfn_range_in_zone(&i, zone, &spfn, &epfn,
>- first_deferred_pfn)) {
>- pgdat->first_deferred_pfn = ULONG_MAX;
>- pgdat_resize_unlock(pgdat, &flags);
>- /* Retry only once. */
>- return first_deferred_pfn != ULONG_MAX;
>+ /*
>+ * Initialize at least nr_pages_needed in section chunks.
>+ * If a section has less free memory than nr_pages_needed, the next
>+ * section will be also initialized.
>+ * Note, that it still does not guarantee that allocation of order can
>+ * be satisfied if the sections are fragmented because of memblock
>+ * allocations.
>+ */
>+ for (spfn = first_deferred_pfn, epfn = SECTION_ALIGN_UP(spfn + 1);
I expect first_deferred_pfn to be section aligned, so initializing epfn to
spfn + PAGES_PER_SECTION should be enough here? Maybe I missed something.
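Just to show what I mean (untested):

	for (spfn = first_deferred_pfn, epfn = spfn + PAGES_PER_SECTION;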
>+ nr_pages < nr_pages_needed && spfn < zone_end_pfn(zone);
>+ spfn = epfn, epfn += PAGES_PER_SECTION) {
>+ nr_pages += deferred_init_memmap_chunk(spfn, epfn, zone);
> }
>
> /*
>- * Initialize and free pages in MAX_PAGE_ORDER sized increments so
>- * that we can avoid introducing any issues with the buddy
>- * allocator.
>+ * There were no pages to initialize and free which means the zone's
>+ * memory map is completely initialized.
> */
>- while (spfn < epfn) {
>- /* update our first deferred PFN for this section */
>- first_deferred_pfn = spfn;
>-
>- nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn);
>- touch_nmi_watchdog();
>-
>- /* We should only stop along section boundaries */
>- if ((first_deferred_pfn ^ spfn) < PAGES_PER_SECTION)
>- continue;
>-
>- /* If our quota has been met we can stop here */
>- if (nr_pages >= nr_pages_needed)
>- break;
>- }
>+ pgdat->first_deferred_pfn = nr_pages ? spfn : ULONG_MAX;
If we get here because spfn >= zone_end_pfn(zone), first_deferred_pfn is left
with a "valid" value and deferred_init_memmap() will still try to do its job,
even though nothing is actually left to initialize.
For this case, I suggest setting it to ULONG_MAX too. But this is really a
corner case.
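For example (untested sketch, only to show the idea):

	if (nr_pages && spfn < zone_end_pfn(zone))
		pgdat->first_deferred_pfn = spfn;
	else
		pgdat->first_deferred_pfn = ULONG_MAX;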
>
>- pgdat->first_deferred_pfn = spfn;
> pgdat_resize_unlock(pgdat, &flags);
>
> return nr_pages > 0;
>--
>2.50.1
>
--
Wei Yang
Help you, Help me