Message-ID: <aW6hISBu3hNYS771@e129823.arm.com>
Date: Mon, 19 Jan 2026 21:24:49 +0000
From: Yeoreum Yun <yeoreum.yun@....com>
To: Will Deacon <will@...nel.org>
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-rt-devel@...ts.linux.dev, catalin.marinas@....com,
ryan.roberts@....com, akpm@...ux-foundation.org, david@...nel.org,
kevin.brodsky@....com, quic_zhenhuah@...cinc.com, dev.jain@....com,
yang@...amperecomputing.com, chaitanyas.prakash@....com,
bigeasy@...utronix.de, clrkwllms@...nel.org, rostedt@...dmis.org,
lorenzo.stoakes@...cle.com, ardb@...nel.org, jackmanb@...gle.com,
vbabka@...e.cz, mhocko@...e.com
Subject: Re: [PATCH v5 2/3] arm64: mmu: avoid allocating pages while
splitting the linear mapping
Hi Will,
> On Mon, Jan 05, 2026 at 08:23:27PM +0000, Yeoreum Yun wrote:
> > +static int __init linear_map_prealloc_split_pgtables(void)
> > +{
> > + int ret, i;
> > + unsigned long lstart = _PAGE_OFFSET(vabits_actual);
> > + unsigned long lend = PAGE_END;
> > + unsigned long kstart = (unsigned long)lm_alias(_stext);
> > + unsigned long kend = (unsigned long)lm_alias(__init_begin);
> > +
> > + const struct mm_walk_ops collect_to_split_ops = {
> > + .pud_entry = collect_to_split_pud_entry,
> > + .pmd_entry = collect_to_split_pmd_entry
> > + };
>
> Why do we need to rewalk the page-table here instead of collating the
> number of block mappings we put down when creating the linear map in
> the first place?
First, the linear alias of [_stext, __init_begin) is not a target for
the split, and it also seems strange to me to add code inside alloc_init_XXX()
that both checks an address range and counts to get the number of block mappings.
Second, for a future feature,
I hope to add some code to split a "specific" area, e.g.
to set a specific pkey for a specific region.
In that case, it's useful to rewalk the page table over the specific
range to get the number of block mappings.
>
> > + split_pgtables_idx = 0;
> > + split_pgtables_count = 0;
> > +
> > + ret = walk_kernel_page_table_range_lockless(lstart, kstart,
> > + &collect_to_split_ops,
> > + NULL, NULL);
> > + if (!ret)
> > + ret = walk_kernel_page_table_range_lockless(kend, lend,
> > + &collect_to_split_ops,
> > + NULL, NULL);
> > + if (ret || !split_pgtables_count)
> > + goto error;
> > +
> > + ret = -ENOMEM;
> > +
> > + split_pgtables = kvmalloc(split_pgtables_count * sizeof(struct ptdesc *),
> > + GFP_KERNEL | __GFP_ZERO);
> > + if (!split_pgtables)
> > + goto error;
> > +
> > + for (i = 0; i < split_pgtables_count; i++) {
> > + /* The page table will be filled during splitting, so zeroing it is unnecessary. */
> > + split_pgtables[i] = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
> > + if (!split_pgtables[i])
> > + goto error;
>
> This looks potentially expensive on the boot path and only gets worse as
> the amount of memory grows. Maybe we should predicate this preallocation
> on preempt-rt?
Agreed. I'll apply the pre-allocation only with PREEMPT_RT, then.
Thanks for your review.
--
Sincerely,
Yeoreum Yun