Message-ID: <54db9fe1-7115-476e-b838-80aa68aabe7e@arm.com>
Date: Fri, 28 Mar 2025 14:39:08 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Dev Jain <dev.jain@....com>, catalin.marinas@....com, will@...nel.org
Cc: gshan@...hat.com, rppt@...nel.org, steven.price@....com,
suzuki.poulose@....com, tianyaxiong@...inos.cn, ardb@...nel.org,
david@...hat.com, urezki@...il.com, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64: pageattr: Explicitly bail out when changing
permissions for vmalloc_huge mappings
On 28/03/2025 02:21, Dev Jain wrote:
> arm64 uses apply_to_page_range to change permissions for kernel VA mappings,
> which does not support changing permissions for leaf mappings. This function
I think you mean "block" mappings here? A leaf mapping refers to a page table
entry that maps a piece of memory at any level in the pgtable (i.e. a present
entry that does not map a table).
A block mapping is the Arm ARM term for a leaf mapping at a level other than
the last (e.g. pmd, pud). A page mapping is the Arm ARM term for a leaf
mapping at the last level (i.e. pte).
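To make that concrete, a minimal sketch (hypothetical helper, not kernel code;
it assumes the generic p*d_leaf()/pte_present() predicates from
<linux/pgtable.h>):

    /*
     * Illustration only: classify already-read pgtable entries.
     * "block" == leaf above the last level; "page" == leaf at the
     * last level.
     */
    static void classify_entry(pud_t pud, pmd_t pmd, pte_t pte)
    {
        if (pud_leaf(pud))
            pr_info("pud: block mapping\n");
        else if (pmd_leaf(pmd))
            pr_info("pmd: block mapping\n");
        else if (pte_present(pte))
            pr_info("pte: page mapping\n");
    }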
> will change permissions until it encounters a leaf mapping, and will bail
block mapping
> out. To avoid this partial change, explicitly disallow changing permissions
> for VM_ALLOW_HUGE_VMAP mappings.
It will also emit a warning. Since there are no reports of this triggering, it
implies that there are currently no cases of code doing a vmalloc_huge()
followed by a partial permission change, at least on arm64 (I'm told BPF does
do this on x86, though). But this is a footgun waiting to go off, so let's
detect it early and avoid the possibility of leaving permissions in an
intermediate state. (It might be worth wordsmithing this into the commit log).
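For illustration, the sort of hypothetical caller this would reject (names and
sizes made up, not taken from any real user):

    /* Hypothetical pattern, for illustration only: */
    void *p = vmalloc_huge(SZ_2M, GFP_KERNEL); /* may use a PMD block mapping */

    if (p)
        /*
         * Changing 1 page out of a 2M block would previously have
         * warned and bailed out part way through; with this patch
         * the request fails cleanly with -EINVAL instead.
         */
        set_memory_ro((unsigned long)p, 1);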
>
> Signed-off-by: Dev Jain <dev.jain@....com>
With the commit log fixed up:
Reviewed-by: Ryan Roberts <ryan.roberts@....com>
> ---
> arch/arm64/mm/pageattr.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 39fd1f7ff02a..8337c88eec69 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -96,7 +96,7 @@ static int change_memory_common(unsigned long addr, int numpages,
> * we are operating on does not result in such splitting.
> *
> * Let's restrict ourselves to mappings created by vmalloc (or vmap).
> - * Those are guaranteed to consist entirely of page mappings, and
> + * Disallow VM_ALLOW_HUGE_VMAP vmalloc mappings so that
> * splitting is never needed.
> *
> * So check whether the [addr, addr + size) interval is entirely
> @@ -105,7 +105,7 @@ static int change_memory_common(unsigned long addr, int numpages,
> area = find_vm_area((void *)addr);
> if (!area ||
> end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
> - !(area->flags & VM_ALLOC))
> + ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
> return -EINVAL;
>
> if (!numpages)
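One more note on the new check, since the mask/compare idiom is a bit dense:
masking with (VM_ALLOC | VM_ALLOW_HUGE_VMAP) and comparing against VM_ALLOC
requires VM_ALLOC to be set *and* VM_ALLOW_HUGE_VMAP to be clear in a single
test. An equivalent, more verbose spelling would be:

    bool ok = (area->flags & VM_ALLOC) &&
              !(area->flags & VM_ALLOW_HUGE_VMAP);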