Message-ID: <831e9e0c-a61b-40d8-a8d2-747b760ba6b3@os.amperecomputing.com>
Date: Thu, 9 Oct 2025 13:26:28 -0700
From: Yang Shi <yang@...amperecomputing.com>
To: Dev Jain <dev.jain@....com>, catalin.marinas@....com, will@...nel.org
Cc: gshan@...hat.com, rppt@...nel.org, steven.price@....com,
 suzuki.poulose@....com, tianyaxiong@...inos.cn, ardb@...nel.org,
 david@...hat.com, ryan.roberts@....com, urezki@...il.com,
 linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64: pageattr: Explicitly bail out when changing
 permissions for vmalloc_huge mappings



On 3/27/25 11:21 PM, Dev Jain wrote:
> arm64 uses apply_to_page_range to change permissions for kernel VA mappings,
> which does not support changing permissions for leaf mappings. This function
> will change permissions until it encounters a leaf mapping, and will bail
> out. To avoid this partial change, explicitly disallow changing permissions
> for VM_ALLOW_HUGE_VMAP mappings.
>
> Signed-off-by: Dev Jain <dev.jain@....com>
> ---
>   arch/arm64/mm/pageattr.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 39fd1f7ff02a..8337c88eec69 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -96,7 +96,7 @@ static int change_memory_common(unsigned long addr, int numpages,
>   	 * we are operating on does not result in such splitting.
>   	 *
>   	 * Let's restrict ourselves to mappings created by vmalloc (or vmap).
> -	 * Those are guaranteed to consist entirely of page mappings, and
> +	 * Disallow VM_ALLOW_HUGE_VMAP vmalloc mappings so that
>   	 * splitting is never needed.
>   	 *
>   	 * So check whether the [addr, addr + size) interval is entirely
> @@ -105,7 +105,7 @@ static int change_memory_common(unsigned long addr, int numpages,
>   	area = find_vm_area((void *)addr);
>   	if (!area ||
>   	    end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
> -	    !(area->flags & VM_ALLOC))
> +	    ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
>   		return -EINVAL;

I happened to find this patch when I was looking into fixing the "splitting 
is never needed" comment to reflect the latest change with BBML2_NOABORT, 
and tried to relax this restriction. I agree with the justification for 
this patch: it makes the code more robust for permission updates on a 
partial range. But the following linear mapping permission update code 
still seems to assume that a partial range update never happens:

for (i = 0; i < area->nr_pages; i++) {

It iterates over all pages of this vm area, starting from the first page, 
and updates their permissions. So I think we should do the below to make 
it more robust to partial range updates, like this patch did:

--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -185,8 +185,9 @@ static int change_memory_common(unsigned long addr, int numpages,
 	 */
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
-		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
+		unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
+		for (i = 0; i < numpages; i++) {
+			__change_memory_common((u64)page_address(area->pages[idx++]),
 					       PAGE_SIZE, set_mask, clear_mask);
 		}
 	}

Just build tested. Does it look reasonable?

Thanks,
Yang


>   
>   	if (!numpages)

