Message-ID: <7b3c83c1-bdf1-4778-948f-223ef0bce2a0@csgroup.eu>
Date: Fri, 5 Sep 2025 07:07:55 +0200
From: Christophe Leroy <christophe.leroy@...roup.eu>
To: "Ritesh Harjani (IBM)" <ritesh.list@...il.com>,
 Andrew Donnellan <ajd@...ux.ibm.com>, Michael Ellerman <mpe@...erman.id.au>,
 Nicholas Piggin <npiggin@...il.com>,
 Madhavan Srinivasan <maddy@...ux.ibm.com>
Cc: linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
 Erhard Furtner <erhard_f@...lbox.org>
Subject: Re: [PATCH] powerpc/32: Remove PAGE_KERNEL_TEXT to fix startup
 failure



On 05/09/2025 at 05:55, Ritesh Harjani wrote:
> Christophe Leroy <christophe.leroy@...roup.eu> writes:
> 
>> PAGE_KERNEL_TEXT is an old macro that is used to tell the kernel whether
>> kernel text has to be mapped read-only or read-write based on build
>> time options.
>>
>> But nowadays, with functionalities like jump_labels, static calls,
>> etc., more or less all kernels need kernel text to be writable at some
>> point, and some combinations of configs failed to work due to an
>> inaccurate setting of PAGE_KERNEL_TEXT. On the other hand, today
>> we have CONFIG_STRICT_KERNEL_RWX, which implements more controlled
>> access to kernel modifications.
>>
>> Instead of trying to keep PAGE_KERNEL_TEXT accurate with all
>> possible options that may imply kernel text modification, always
>> set kernel text read-write at startup and rely on
>> CONFIG_STRICT_KERNEL_RWX to provide accurate protection.
>>
>> Reported-by: Erhard Furtner <erhard_f@...lbox.org>
>> Closes: https://lore.kernel.org/all/342b4120-911c-4723-82ec-d8c9b03a8aef@mailbox.org/
>> Signed-off-by: Christophe Leroy <christophe.leroy@...roup.eu>
>> ---
>>   arch/powerpc/include/asm/pgtable.h | 12 ------------
>>   arch/powerpc/mm/book3s32/mmu.c     |  4 ++--
>>   arch/powerpc/mm/pgtable_32.c       |  2 +-
>>   3 files changed, 3 insertions(+), 15 deletions(-)
>>
> 
> AFAIU, mmu_mark_initmem_nx() gets called during kernel_init(), which is
> way after static call initialization, correct? i.e.
> 
> start_kernel
>    ...
>    jump_label_init()
>    static_call_init()
>    ...
>    ...
>    rest_init()      /* Do the rest non-__init'ed, we're now alive */
>      kernel_init()
>        free_initmem() -> mark_initmem_nx() -> __mark_initmem_nx -> mmu_mark_initmem_nx()
>        mark_readonly()
>          if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX) && rodata_enabled) {
>             jump_label_init_ro()
>             mark_rodata_ro() -> ....
>             ...
>          ...
> 
> Then I guess we mainly need __mapin_ram_chunk() to use PAGE_KERNEL_X (RWX)
> instead of PAGE_KERNEL_TEXT (ROX), don't we?
> 
> Let me quickly validate it...
> ...Ok, so I was able to get just this diff to be working.
> 
> Thoughts?

setibat() doesn't take into account whether the mapping is RO or RW. Only X or NX 
is taken into account, so it doesn't matter whether we pass X or ROX.
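
Just to illustrate the point (this is not the real kernel code, which lives in 
arch/powerpc/mm/book3s32/mmu.c, and the flag values below are made up): an 
IBAT-style mapping primitive only keeps the exec part of the protection, so 
PAGE_KERNEL_X and PAGE_KERNEL_ROX end up encoded the same way:

#include <stdio.h>

/* Hypothetical flag values, for illustration only. */
#define _PAGE_RW	0x001
#define _PAGE_EXEC	0x002

/* Stand-ins for PAGE_KERNEL_X (RWX) and PAGE_KERNEL_ROX (ROX). */
#define PAGE_KERNEL_X	(_PAGE_RW | _PAGE_EXEC)
#define PAGE_KERNEL_ROX	(_PAGE_EXEC)

/* Only the exec bit survives; the RO/RW distinction is dropped. */
static unsigned int ibat_perms(unsigned int prot)
{
	return prot & _PAGE_EXEC;
}

int main(void)
{
	printf("X   -> %#x\n", ibat_perms(PAGE_KERNEL_X));
	printf("ROX -> %#x\n", ibat_perms(PAGE_KERNEL_ROX)); /* same result */
	return 0;
}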

So although you are right in principle, once PAGE_KERNEL_TEXT is 
removed from __mapin_ram_chunk() it becomes completely useless, so 
better to get rid of PAGE_KERNEL_TEXT completely.

> 
> diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
> index 15276068f657..0c9ef705803e 100644
> --- a/arch/powerpc/mm/pgtable_32.c
> +++ b/arch/powerpc/mm/pgtable_32.c
> @@ -104,7 +104,7 @@ static void __init __mapin_ram_chunk(unsigned long offset, unsigned long top)
>          p = memstart_addr + s;
>          for (; s < top; s += PAGE_SIZE) {
>                  ktext = core_kernel_text(v);
> -               map_kernel_page(v, p, ktext ? PAGE_KERNEL_TEXT : PAGE_KERNEL);
> +               map_kernel_page(v, p, ktext ? PAGE_KERNEL_X : PAGE_KERNEL);
>                  v += PAGE_SIZE;
>                  p += PAGE_SIZE;
>          }
> 
> -ritesh
> 
> 
> 
>> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
>> index 93d77ad5a92f..d8f944a5a037 100644
>> --- a/arch/powerpc/include/asm/pgtable.h
>> +++ b/arch/powerpc/include/asm/pgtable.h
>> @@ -20,18 +20,6 @@ struct mm_struct;
>>   #include <asm/nohash/pgtable.h>
>>   #endif /* !CONFIG_PPC_BOOK3S */
>>   
>> -/*
>> - * Protection used for kernel text. We want the debuggers to be able to
>> - * set breakpoints anywhere, so don't write protect the kernel text
>> - * on platforms where such control is possible.
>> - */
>> -#if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) || \
>> -	defined(CONFIG_KPROBES) || defined(CONFIG_DYNAMIC_FTRACE)
>> -#define PAGE_KERNEL_TEXT	PAGE_KERNEL_X
>> -#else
>> -#define PAGE_KERNEL_TEXT	PAGE_KERNEL_ROX
>> -#endif
>> -
>>   /* Make modules code happy. We don't set RO yet */
>>   #define PAGE_KERNEL_EXEC	PAGE_KERNEL_X
>>   
>> diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
>> index be9c4106e22f..c42ecdf94e48 100644
>> --- a/arch/powerpc/mm/book3s32/mmu.c
>> +++ b/arch/powerpc/mm/book3s32/mmu.c
>> @@ -204,7 +204,7 @@ int mmu_mark_initmem_nx(void)
>>   
>>   	for (i = 0; i < nb - 1 && base < top;) {
>>   		size = bat_block_size(base, top);
>> -		setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_TEXT);
>> +		setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_X);
>>   		base += size;
>>   	}
>>   	if (base < top) {
>> @@ -215,7 +215,7 @@ int mmu_mark_initmem_nx(void)
>>   				pr_warn("Some RW data is getting mapped X. "
>>   					"Adjust CONFIG_DATA_SHIFT to avoid that.\n");
>>   		}
>> -		setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_TEXT);
>> +		setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_X);
>>   		base += size;
>>   	}
>>   	for (; i < nb; i++)
>> diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
>> index 15276068f657..0c9ef705803e 100644
>> --- a/arch/powerpc/mm/pgtable_32.c
>> +++ b/arch/powerpc/mm/pgtable_32.c
>> @@ -104,7 +104,7 @@ static void __init __mapin_ram_chunk(unsigned long offset, unsigned long top)
>>   	p = memstart_addr + s;
>>   	for (; s < top; s += PAGE_SIZE) {
>>   		ktext = core_kernel_text(v);
>> -		map_kernel_page(v, p, ktext ? PAGE_KERNEL_TEXT : PAGE_KERNEL);
>> +		map_kernel_page(v, p, ktext ? PAGE_KERNEL_X : PAGE_KERNEL);
>>   		v += PAGE_SIZE;
>>   		p += PAGE_SIZE;
>>   	}
>> -- 
>> 2.49.0

