Message-ID: <d0f2df03-21da-473f-ab89-64a964c18f08@arm.com>
Date: Tue, 27 Jan 2026 10:21:36 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Ard Biesheuvel <ardb+git@...gle.com>, linux-kernel@...r.kernel.org
Cc: linux-arm-kernel@...ts.infradead.org, will@...nel.org,
catalin.marinas@....com, mark.rutland@....com,
Ard Biesheuvel <ardb@...nel.org>,
Anshuman Khandual <anshuman.khandual@....com>,
Liz Prucka <lizprucka@...gle.com>, Seth Jenkins <sethjenkins@...gle.com>,
Kees Cook <kees@...nel.org>, linux-hardening@...r.kernel.org
Subject: Re: [PATCH v2 08/10] arm64: mm: Don't abuse memblock NOMAP to check
for overlaps
On 26/01/2026 09:26, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@...nel.org>
>
> Now that the DRAM mapping routines respect existing table mappings and
> contiguous block and page mappings, there is no longer any need to fiddle
> with the memblock tables to set and clear the NOMAP attribute. Instead,
> map the kernel text and rodata alias first, avoiding contiguous
> mappings, so that they will not be added later when mapping the
> memblocks.
Should we do something similar for kfence? Currently we have
arm64_kfence_alloc_pool(), which marks some memory NOMAP, and then
arm64_kfence_map_pool(), which PTE-maps it and clears NOMAP. Presumably we
could rationalize those into a single function that does it all, prior to
mapping the bulk of the linear map?
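Something along the lines of the below, perhaps? Completely untested sketch,
just reusing the existing helpers; the arm64_kfence_init_pool() name is made
up:

	static phys_addr_t __init arm64_kfence_init_pool(void)
	{
		phys_addr_t kfence_pool;

		if (!kfence_early_init)
			return 0;

		kfence_pool = memblock_phys_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
		if (!kfence_pool) {
			pr_err("failed to allocate kfence pool\n");
			kfence_early_init = false;
			return 0;
		}

		/*
		 * PTE-map the pool up front; the later pass over the
		 * memblocks would leave these existing page mappings
		 * alone, so the NOMAP mark/clear dance becomes
		 * unnecessary.
		 */
		__map_memblock(kfence_pool, kfence_pool + KFENCE_POOL_SIZE,
			       pgprot_tagged(PAGE_KERNEL),
			       NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
		__kfence_pool = phys_to_virt(kfence_pool);

		return kfence_pool;
	}

called from map_mem() before the for_each_mem_range() loop.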
>
> Signed-off-by: Ard Biesheuvel <ardb@...nel.org>
> ---
> arch/arm64/mm/mmu.c | 27 ++++++++------------
> 1 file changed, 10 insertions(+), 17 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 80587cd47ce7..18415d4743bf 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1149,12 +1149,17 @@ static void __init map_mem(void)
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>
> /*
> - * Take care not to create a writable alias for the
> - * read-only text and rodata sections of the kernel image.
> - * So temporarily mark them as NOMAP to skip mappings in
> - * the following for-loop
> + * Map the linear alias of the [_text, __init_begin) interval
> + * as non-executable now, and remove the write permission in
> + * mark_linear_text_alias_ro() above (which will be called after
> + * alternative patching has completed). This makes the contents
> + * of the region accessible to subsystems such as hibernate,
> + * but protects it from inadvertent modification or execution.
> + * Note that contiguous mappings cannot be remapped in this way,
> + * so we should avoid them here.
> */
> - memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
> + __map_memblock(kernel_start, kernel_end, PAGE_KERNEL,
> + flags | NO_CONT_MAPPINGS);
So the reason to disallow cont mappings is that we need to modify the
permissions later? It _is_ safe to change permissions on a live contiguous
mapping in this way; that was clarified in the architecture a couple of years
back, and we rely on it for contpte_wrprotect_ptes() (see the comment there).
I think we could relax this?
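i.e. something like the below (untested, and assuming
mark_linear_text_alias_ro() is happy to flip the permissions on the resulting
contiguous entries):

	__map_memblock(kernel_start, kernel_end, PAGE_KERNEL, flags);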
Thanks,
Ryan
>
> /* map all the memory banks */
> for_each_mem_range(i, &start, &end) {
> @@ -1167,18 +1172,6 @@ static void __init map_mem(void)
> flags);
> }
>
> - /*
> - * Map the linear alias of the [_text, __init_begin) interval
> - * as non-executable now, and remove the write permission in
> - * mark_linear_text_alias_ro() below (which will be called after
> - * alternative patching has completed). This makes the contents
> - * of the region accessible to subsystems such as hibernate,
> - * but protects it from inadvertent modification or execution.
> - * Note that contiguous mappings cannot be remapped in this way,
> - * so we should avoid them here.
> - */
> - __map_memblock(kernel_start, kernel_end, PAGE_KERNEL, NO_CONT_MAPPINGS);
> - memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
> arm64_kfence_map_pool(early_kfence_pool);
> }
>