Message-ID: <663493ae-6b73-4c63-b306-66bcca17fab1@os.amperecomputing.com>
Date: Tue, 4 Nov 2025 08:00:31 -0800
From: Yang Shi <yang@...amperecomputing.com>
To: Ryan Roberts <ryan.roberts@....com>, catalin.marinas@....com,
 will@...nel.org
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64: kprobes: check the return value of
 set_memory_rox()



On 11/4/25 5:44 AM, Ryan Roberts wrote:
> On 04/11/2025 13:14, Ryan Roberts wrote:
>> On 03/11/2025 19:45, Yang Shi wrote:
>>> Since commit a166563e7ec3 ("arm64: mm: support large block mapping when
>>> rodata=full"), __change_memory_common() is more likely to fail due to
>>> memory allocation failure when splitting the page table. So check the
>>> return value of set_memory_rox() and bail out if it fails; otherwise we
>>> may be left with an RW mapping for the kprobes insn page.
>>>
>>> Fixes: 195a1b7d8388 ("arm64: kprobes: call set_memory_rox() for kprobe page")
>>> Signed-off-by: Yang Shi <yang@...amperecomputing.com>
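
(For context on why a single set_memory_rox() call can fail part-way: arm64
doesn't define its own set_memory_rox(), so - assuming the generic fallback in
include/linux/set_memory.h is what gets used here - the call is roughly the
following; a sketch from memory, not a verbatim quote:

/* Generic fallback: read-only first, then executable. */
static inline int set_memory_rox(unsigned long addr, int numpages)
{
	int ret = set_memory_ro(addr, numpages);

	if (ret)
		return ret;
	return set_memory_x(addr, numpages);
}

Both steps go through change_memory_common()/__change_memory_common(), so
either one can hit the allocation failure when a block mapping has to be
split, which is what the new check below catches.)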
>> This patch looks correct so:
>>
>> Reviewed-by: Ryan Roberts <ryan.roberts@....com>

Thank you.

>>
>> but I think I see a separate issue below...
>>
>>> ---
>>> I actually expected 195a1b7d8388 ("arm64: kprobes: call set_memory_rox()
>>> for kprobe page") to be merged in 6.17-rcX, so I just restored the code to
>>> its state before commit 10d5e97c1bf8 ("arm64: use PAGE_KERNEL_ROX directly
>>> in alloc_insn_page"). However, it turned out to be merged in 6.18-rc1,
>>> after commit a166563e7ec3 ("arm64: mm: support large block mapping when
>>> rodata=full"), so I made the Fixes tag point to it.
>>> And I don't think we need to backport this patch to pre-6.18.
>>>
>>>   arch/arm64/kernel/probes/kprobes.c | 5 ++++-
>>>   1 file changed, 4 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
>>> index 8ab6104a4883..43a0361a8bf0 100644
>>> --- a/arch/arm64/kernel/probes/kprobes.c
>>> +++ b/arch/arm64/kernel/probes/kprobes.c
>>> @@ -49,7 +49,10 @@ void *alloc_insn_page(void)
>>>   	addr = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
>>>   	if (!addr)
>>>   		return NULL;
>>> -	set_memory_rox((unsigned long)addr, 1);
>>> +	if (set_memory_rox((unsigned long)addr, 1)) {
>> How does the execute permission get cleared when freeing this memory? arm64's set_memory_x() sets
>> PTE_MAYBE_GP and clears PTE_PXN. The only function that will revert that is
>> set_memory_nx(). But that only gets called from module_enable_data_nx() (which I
>> don't think is applicable here) and execmem_force_rw() - but only if
>> CONFIG_ARCH_HAS_EXECMEM_ROX is enabled, which I don't think it is for arm64?
>>
>> So I think once we flip a page executable, it will be executable forever?
>>
>> Do we need to modify set_direct_map_default_noflush() to make the memory nx?
>> Then vm_reset_perms() will fix it up at vfree time?
>
> Dev just pointed this out to me. Panic over!

Aha, yes, it doesn't clear PXN at all.

Thanks,
Yang

>
> static int change_memory_common(unsigned long addr, int numpages,
> 				pgprot_t set_mask, pgprot_t clear_mask)
> {
> 	...
>
> 	/*
> 	 * If we are manipulating read-only permissions, apply the same
> 	 * change to the linear mapping of the pages that back this VM area.
> 	 */
> 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
> 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
> 		for (i = 0; i < area->nr_pages; i++) {
> 			__change_memory_common(...);
> 		}
> 	}
>
> 	...
> }
>
>
>> Thanks,
>> Ryan
>>
>>> +		execmem_free(addr);
>>> +		return NULL;
>>> +	}
>>>   	return addr;
>>>   }
>>>   
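
For reference, with the hunk applied alloc_insn_page() should end up looking
roughly like this (the local declaration at the top is assumed from the
surrounding context; it isn't visible in the diff):

void *alloc_insn_page(void)
{
	void *addr;

	/* Get a page from the kprobes executable-memory pool. */
	addr = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
	if (!addr)
		return NULL;
	/*
	 * Flip it to read-only + executable. If that fails (e.g. the page
	 * table split can't allocate memory), free the page rather than
	 * handing back a writable mapping.
	 */
	if (set_memory_rox((unsigned long)addr, 1)) {
		execmem_free(addr);
		return NULL;
	}
	return addr;
}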

