Message-ID: <8918c4b9-5156-4b9a-83cb-d1d4856ae48d@mailbox.org>
Date: Fri, 5 Sep 2025 16:30:02 +0200
From: Erhard Furtner <erhard_f@...lbox.org>
To: Christophe Leroy <christophe.leroy@...roup.eu>,
Andrew Donnellan <ajd@...ux.ibm.com>, Michael Ellerman <mpe@...erman.id.au>,
Nicholas Piggin <npiggin@...il.com>,
Madhavan Srinivasan <maddy@...ux.ibm.com>
Cc: linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH] powerpc/32: Remove PAGE_KERNEL_TEXT to fix startup
failure
On 9/4/25 18:33, Christophe Leroy wrote:
> PAGE_KERNEL_TEXT is an old macro that is used to tell kernel whether
> kernel text has to be mapped read-only or read-write based on build
> time options.
>
> But nowadays, with functionalities like jump_labels, static calls,
> etc ... more or less all kernels need to be read-write at some
> point, and some combinations of configs fail to work due to
> inaccurate setting of PAGE_KERNEL_TEXT. On the other hand, today
> we have CONFIG_STRICT_KERNEL_RWX which implements a more controlled
> access to kernel modifications.
>
> Instead of trying to keep PAGE_KERNEL_TEXT accurate with all
> possible options that may imply kernel text modification, always
> set kernel text read-write at startup and rely on
> CONFIG_STRICT_KERNEL_RWX to provide accurate protection.
I can confirm your patch fixes the startup failure for my G4 .config.
Thanks!
Regards,
Erhard
> Reported-by: Erhard Furtner <erhard_f@...lbox.org>
> Closes: https://lore.kernel.org/all/342b4120-911c-4723-82ec-d8c9b03a8aef@mailbox.org/
> Signed-off-by: Christophe Leroy <christophe.leroy@...roup.eu>
> ---
> arch/powerpc/include/asm/pgtable.h | 12 ------------
> arch/powerpc/mm/book3s32/mmu.c | 4 ++--
> arch/powerpc/mm/pgtable_32.c | 2 +-
> 3 files changed, 3 insertions(+), 15 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 93d77ad5a92f..d8f944a5a037 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -20,18 +20,6 @@ struct mm_struct;
> #include <asm/nohash/pgtable.h>
> #endif /* !CONFIG_PPC_BOOK3S */
>
> -/*
> - * Protection used for kernel text. We want the debuggers to be able to
> - * set breakpoints anywhere, so don't write protect the kernel text
> - * on platforms where such control is possible.
> - */
> -#if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) || \
> - defined(CONFIG_KPROBES) || defined(CONFIG_DYNAMIC_FTRACE)
> -#define PAGE_KERNEL_TEXT PAGE_KERNEL_X
> -#else
> -#define PAGE_KERNEL_TEXT PAGE_KERNEL_ROX
> -#endif
> -
> /* Make modules code happy. We don't set RO yet */
> #define PAGE_KERNEL_EXEC PAGE_KERNEL_X
>
> diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
> index be9c4106e22f..c42ecdf94e48 100644
> --- a/arch/powerpc/mm/book3s32/mmu.c
> +++ b/arch/powerpc/mm/book3s32/mmu.c
> @@ -204,7 +204,7 @@ int mmu_mark_initmem_nx(void)
>
> for (i = 0; i < nb - 1 && base < top;) {
> size = bat_block_size(base, top);
> - setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_TEXT);
> + setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_X);
> base += size;
> }
> if (base < top) {
> @@ -215,7 +215,7 @@ int mmu_mark_initmem_nx(void)
> pr_warn("Some RW data is getting mapped X. "
> "Adjust CONFIG_DATA_SHIFT to avoid that.\n");
> }
> - setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_TEXT);
> + setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_X);
> base += size;
> }
> for (; i < nb; i++)
> diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
> index 15276068f657..0c9ef705803e 100644
> --- a/arch/powerpc/mm/pgtable_32.c
> +++ b/arch/powerpc/mm/pgtable_32.c
> @@ -104,7 +104,7 @@ static void __init __mapin_ram_chunk(unsigned long offset, unsigned long top)
> p = memstart_addr + s;
> for (; s < top; s += PAGE_SIZE) {
> ktext = core_kernel_text(v);
> - map_kernel_page(v, p, ktext ? PAGE_KERNEL_TEXT : PAGE_KERNEL);
> + map_kernel_page(v, p, ktext ? PAGE_KERNEL_X : PAGE_KERNEL);
> v += PAGE_SIZE;
> p += PAGE_SIZE;
> }