Message-ID: <aW5p1ylQBNXTEmmq@willie-the-truck>
Date: Mon, 19 Jan 2026 17:28:55 +0000
From: Will Deacon <will@...nel.org>
To: Yeoreum Yun <yeoreum.yun@....com>
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
	linux-rt-devel@...ts.linux.dev, catalin.marinas@....com,
	ryan.roberts@....com, akpm@...ux-foundation.org, david@...nel.org,
	kevin.brodsky@....com, quic_zhenhuah@...cinc.com, dev.jain@....com,
	yang@...amperecomputing.com, chaitanyas.prakash@....com,
	bigeasy@...utronix.de, clrkwllms@...nel.org, rostedt@...dmis.org,
	lorenzo.stoakes@...cle.com, ardb@...nel.org, jackmanb@...gle.com,
	vbabka@...e.cz, mhocko@...e.com
Subject: Re: [PATCH v5 2/3] arm64: mmu: avoid allocating pages while
 splitting the linear mapping

On Mon, Jan 05, 2026 at 08:23:27PM +0000, Yeoreum Yun wrote:
> +static int __init linear_map_prealloc_split_pgtables(void)
> +{
> +	int ret, i;
> +	unsigned long lstart = _PAGE_OFFSET(vabits_actual);
> +	unsigned long lend = PAGE_END;
> +	unsigned long kstart = (unsigned long)lm_alias(_stext);
> +	unsigned long kend = (unsigned long)lm_alias(__init_begin);
> +
> +	const struct mm_walk_ops collect_to_split_ops = {
> +		.pud_entry	= collect_to_split_pud_entry,
> +		.pmd_entry	= collect_to_split_pmd_entry
> +	};

Why do we need to rewalk the page-table here instead of collating the
number of block mappings we put down when creating the linear map in
the first place?
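
Something like the below (an entirely untested sketch; the counter name
and the hook site are made up, not anything in mmu.c today) would avoid
the second walk:

	/*
	 * Sketch: count block mappings as they are put down while the
	 * linear map is created, instead of rewalking it afterwards.
	 * 'linear_map_nr_blocks' and note_linear_block() are hypothetical.
	 */
	static unsigned long __initdata linear_map_nr_blocks;

	static void __init note_linear_block(void)
	{
		/* call from wherever the PUD/PMD block entry is written */
		linear_map_nr_blocks++;
	}

	/* ... and later, at prealloc time: */
	split_pgtables_count = linear_map_nr_blocks;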

> +	split_pgtables_idx = 0;
> +	split_pgtables_count = 0;
> +
> +	ret = walk_kernel_page_table_range_lockless(lstart, kstart,
> +						    &collect_to_split_ops,
> +						    NULL, NULL);
> +	if (!ret)
> +		ret = walk_kernel_page_table_range_lockless(kend, lend,
> +							    &collect_to_split_ops,
> +							    NULL, NULL);
> +	if (ret || !split_pgtables_count)
> +		goto error;
> +
> +	ret = -ENOMEM;
> +
> +	split_pgtables = kvmalloc(split_pgtables_count * sizeof(struct ptdesc *),
> +				  GFP_KERNEL | __GFP_ZERO);
> +	if (!split_pgtables)
> +		goto error;
> +
> +	for (i = 0; i < split_pgtables_count; i++) {
> +		/* The page table will be filled during splitting, so zeroing it is unnecessary. */
> +		split_pgtables[i] = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
> +		if (!split_pgtables[i])
> +			goto error;

This looks potentially expensive on the boot path and only gets worse as
the amount of memory grows. Maybe we should predicate this preallocation
on preempt-rt?
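
i.e. something like this (untested sketch) early in
linear_map_prealloc_split_pgtables():

	/*
	 * Sketch: skip the preallocation entirely unless PREEMPT_RT is
	 * enabled, since only RT needs to split without allocating.
	 */
	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
		return 0;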

Will
