Message-ID: <CAJrd-UuzTh-0Ee9+rMRES9onP_EkvJS-VpPP66J4M4n0Ku0ZWA@mail.gmail.com>
Date: Tue, 17 May 2022 20:38:18 +0900
From: Jaewon Kim <jaewon31.kim@...il.com>
To: Mike Rapoport <rppt@...ux.ibm.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Jaewon Kim <jaewon31.kim@...sung.com>,
Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [RFC PATCH] page_ext: create page extension for all memblock
memory regions
Hello Mike Rapoport,
Thank you for your comment.
Oh really? Could you point me to the code or the commit behind 'all
struct pages in any section should be valid and properly initialized'?
Actually I am using a v5.10-based source tree on an arm64 device.
I tried to look this up and found the following commit in v5.16-rc1; did
you mean this one?
3de360c3fdb3 arm64/mm: drop HAVE_ARCH_PFN_VALID
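If I understand that change correctly, after HAVE_ARCH_PFN_VALID is dropped
arm64 falls back to the generic sparsemem pfn_valid(), which only looks at
the section/subsection maps. A rough sketch from memory (the exact checks
may differ between versions):

    /* generic sparsemem pfn_valid(), roughly, from memory */
    static inline int pfn_valid(unsigned long pfn)
    {
            struct mem_section *ms;

            if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
                    return 0;
            ms = __pfn_to_section(pfn);
            if (!valid_section(ms))
                    return 0;
            /* early sections are treated as valid for the whole section span */
            return early_section(ms) || pfn_section_valid(ms, pfn);
    }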
I guess the memblock_is_memory() check in pfn_valid() in
arch/arm64/mm/init.c in v5.10 might affect page_ext_init().
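For reference, the arm64 pfn_valid() around v5.10 additionally consults
memblock, so a pfn not covered by a memblock memory region fails
pfn_valid() even when its section is otherwise present. Roughly, from
memory (whether it is memblock_is_memory() or memblock_is_map_memory()
depends on the exact version):

    /* arch/arm64/mm/init.c around v5.10, simplified from memory */
    int pfn_valid(unsigned long pfn)
    {
            phys_addr_t addr = pfn << PAGE_SHIFT;

            if ((addr >> PAGE_SHIFT) != pfn)
                    return 0;
    #ifdef CONFIG_SPARSEMEM
            if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
                    return 0;
            if (!valid_section(__pfn_to_section(pfn)))
                    return 0;
    #endif
            /* this memblock check can reject the first pfn of a section */
            return memblock_is_map_memory(addr);
    }

So when the first pfn of a section fails this check, page_ext_init() in
v5.10 skips the whole section, which is the case the patch tries to handle.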
Thank you
On Tue, May 17, 2022 at 5:25 PM, Mike Rapoport <rppt@...ux.ibm.com> wrote:
>
> On Mon, May 16, 2022 at 05:33:21PM -0700, Andrew Morton wrote:
> > On Mon, 9 May 2022 16:43:30 +0900 Jaewon Kim <jaewon31.kim@...sung.com> wrote:
> >
> > > The page extension can be prepared for each section. But if the first
> > > page is not valid, the page extension for that section is not
> > > initialized even though there are many other valid pages within the section.
>
> What do you mean by "first page [in a section] is not valid"?
> In recent kernels all struct pages in any section should be valid and
> properly initialized.
>
> > > To support the page extension for all sections, refer to the memblock
> > > memory regions. If the page is valid, use the nid from pfn_to_nid();
> > > otherwise use the previous nid.
> > >
> > > Also this patch changes the log to include the total number of sections
> > > and the section size.
> > >
> > > i.e.
> > > allocated 100663296 bytes of page_ext for 64 sections (1 section : 0x8000000)
> >
> > Cc Joonsoo, who wrote this code.
> > Cc Mike, for memblock.
> >
> > Thanks.
> >
> > >
> > > diff --git a/mm/page_ext.c b/mm/page_ext.c
> > > index 2e66d934d63f..506d58b36a1d 100644
> > > --- a/mm/page_ext.c
> > > +++ b/mm/page_ext.c
> > > @@ -381,41 +381,43 @@ static int __meminit page_ext_callback(struct notifier_block *self,
> > > void __init page_ext_init(void)
> > > {
> > > unsigned long pfn;
> > > - int nid;
> > > + int nid = 0;
> > > + struct memblock_region *rgn;
> > > + int nr_section = 0;
> > > + unsigned long next_section_pfn = 0;
> > >
> > > if (!invoke_need_callbacks())
> > > return;
> > >
> > > - for_each_node_state(nid, N_MEMORY) {
> > > + /*
> > > + * iterate each memblock memory region and do not skip a section having
> > > + * !pfn_valid(pfn)
> > > + */
> > > + for_each_mem_region(rgn) {
> > > unsigned long start_pfn, end_pfn;
> > >
> > > - start_pfn = node_start_pfn(nid);
> > > - end_pfn = node_end_pfn(nid);
> > > - /*
> > > - * start_pfn and end_pfn may not be aligned to SECTION and the
> > > - * page->flags of out of node pages are not initialized. So we
> > > - * scan [start_pfn, the biggest section's pfn < end_pfn) here.
> > > - */
> > > + start_pfn = (unsigned long)(rgn->base >> PAGE_SHIFT);
> > > + end_pfn = start_pfn + (unsigned long)(rgn->size >> PAGE_SHIFT);
> > > +
> > > + if (start_pfn < next_section_pfn)
> > > + start_pfn = next_section_pfn;
> > > +
> > > for (pfn = start_pfn; pfn < end_pfn;
> > > pfn = ALIGN(pfn + 1, PAGES_PER_SECTION)) {
> > >
> > > - if (!pfn_valid(pfn))
> > > - continue;
> > > - /*
> > > - * Nodes's pfns can be overlapping.
> > > - * We know some arch can have a nodes layout such as
> > > - * -------------pfn-------------->
> > > - * N0 | N1 | N2 | N0 | N1 | N2|....
> > > - */
> > > - if (pfn_to_nid(pfn) != nid)
> > > - continue;
> > > + if (pfn_valid(pfn))
> > > + nid = pfn_to_nid(pfn);
> > > + nr_section++;
> > > if (init_section_page_ext(pfn, nid))
> > > goto oom;
> > > cond_resched();
> > > }
> > > + next_section_pfn = pfn;
> > > }
> > > +
> > > hotplug_memory_notifier(page_ext_callback, 0);
> > > - pr_info("allocated %ld bytes of page_ext\n", total_usage);
> > > + pr_info("allocated %ld bytes of page_ext for %d sections (1 section : 0x%x)\n",
> > > + total_usage, nr_section, (1 << SECTION_SIZE_BITS));
> > > invoke_init_callbacks();
> > > return;
> > >
> > > --
> > > 2.17.1
> > >
>
> --
> Sincerely yours,
> Mike.