Message-ID: <Z-cnmklGUojMzsUF@kernel.org>
Date: Sat, 29 Mar 2025 01:50:02 +0300
From: Mike Rapoport <rppt@...nel.org>
To: Dev Jain <dev.jain@....com>
Cc: catalin.marinas@....com, will@...nel.org, gshan@...hat.com,
	steven.price@....com, suzuki.poulose@....com,
	tianyaxiong@...inos.cn, ardb@...nel.org, david@...hat.com,
	ryan.roberts@....com, urezki@...il.com,
	linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64: pageattr: Explicitly bail out when changing
 permissions for vmalloc_huge mappings

On Fri, Mar 28, 2025 at 11:51:03AM +0530, Dev Jain wrote:
> arm64 uses apply_to_page_range to change permissions for kernel VA mappings,

                                                     for vmalloc mappings ^

arm64 does not allow changing permissions for arbitrary kernel VAs right now, only for vmalloc/vmap mappings.

> which does not support changing permissions for leaf mappings. This function
> will change permissions until it encounters a leaf mapping, and will bail
> out. To avoid this partial change, explicitly disallow changing permissions
> for VM_ALLOW_HUGE_VMAP mappings.
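
Not part of the patch, just to make the effect concrete for the archive:
vmalloc_huge() puts VM_ALLOW_HUGE_VMAP on the area, so with this patch
set_memory_ro() on such a region is refused up front with -EINVAL rather than
stopping at the first block mapping and leaving the range partially updated.
A minimal sketch; the demo_set_ro() helper is hypothetical and error handling
is trimmed:

#include <linux/vmalloc.h>
#include <linux/set_memory.h>
#include <linux/mm.h>
#include <linux/errno.h>

static int demo_set_ro(void)
{
	/* May be backed by block (huge) mappings; VM_ALLOW_HUGE_VMAP is set. */
	void *p = vmalloc_huge(PMD_SIZE, GFP_KERNEL);
	int ret;

	if (!p)
		return -ENOMEM;

	/* With this patch: -EINVAL, and no partial permission change. */
	ret = set_memory_ro((unsigned long)p, PMD_SIZE / PAGE_SIZE);

	vfree(p);
	return ret;
}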
> 
> Signed-off-by: Dev Jain <dev.jain@....com>
> ---
>  arch/arm64/mm/pageattr.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 39fd1f7ff02a..8337c88eec69 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -96,7 +96,7 @@ static int change_memory_common(unsigned long addr, int numpages,
>  	 * we are operating on does not result in such splitting.
>  	 *
>  	 * Let's restrict ourselves to mappings created by vmalloc (or vmap).
> -	 * Those are guaranteed to consist entirely of page mappings, and
> +	 * Disallow VM_ALLOW_HUGE_VMAP vmalloc mappings so that

I'd keep the mention of page mappings in the comment, e.g.:

	* Disallow VM_ALLOW_HUGE_VMAP mappings to guarantee that only page
	* mappings are updated and splitting is never needed.

With this, and the changelog updates Ryan asked for:

Reviewed-by: Mike Rapoport (Microsoft) <rppt@...nel.org>


>  	 * splitting is never needed.
>  	 *
>  	 * So check whether the [addr, addr + size) interval is entirely
> @@ -105,7 +105,7 @@ static int change_memory_common(unsigned long addr, int numpages,
>  	area = find_vm_area((void *)addr);
>  	if (!area ||
>  	    end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
> -	    !(area->flags & VM_ALLOC))
> +	    ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
>  		return -EINVAL;
>  
>  	if (!numpages)
> -- 
> 2.30.2
> 
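
A side note for readers of the archive, not a request for changes: the
combined mask in the new check reads as "VM_ALLOC set and VM_ALLOW_HUGE_VMAP
clear". A minimal user-space sketch of the idiom, with made-up flag values
(the real definitions live in include/linux/vmalloc.h):

#include <stdio.h>

#define VM_ALLOC           0x1	/* illustrative value only */
#define VM_ALLOW_HUGE_VMAP 0x2	/* illustrative value only */

static int rejected(unsigned long flags)
{
	/* True unless VM_ALLOC is set and VM_ALLOW_HUGE_VMAP is clear. */
	return (flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC;
}

int main(void)
{
	printf("%d\n", rejected(VM_ALLOC));                      /* 0: allowed  */
	printf("%d\n", rejected(VM_ALLOC | VM_ALLOW_HUGE_VMAP)); /* 1: rejected */
	printf("%d\n", rejected(0));                             /* 1: rejected */
	return 0;
}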

-- 
Sincerely yours,
Mike.
