Message-ID: <400b6d4e-bf10-4b89-bcbe-2375b1972220@arm.com>
Date: Wed, 25 Jun 2025 12:08:59 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Dev Jain <dev.jain@....com>, akpm@...ux-foundation.org, david@...hat.com,
catalin.marinas@....com, will@...nel.org
Cc: lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com, vbabka@...e.cz,
rppt@...nel.org, surenb@...gle.com, mhocko@...e.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, suzuki.poulose@....com, steven.price@....com,
gshan@...hat.com, linux-arm-kernel@...ts.infradead.org,
yang@...amperecomputing.com, anshuman.khandual@....com
Subject: Re: [PATCH v3 2/2] arm64: pageattr: Enable huge-vmalloc permission
change
On 13/06/2025 14:43, Dev Jain wrote:
> Commit fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing
> permissions for vmalloc_huge mappings") disallowed changing permissions
> for vmalloc-huge mappings. The motivation for this was to enforce an API
> requirement and explicitly tell the caller that it is unsafe to change
> permissions for block mappings, since splitting may be required, which
> cannot be handled safely on an arm64 system in the absence of BBML2.
>
> This patch is effectively a partial revert of that commit: patch 1
> enables permission changes on kernel block mappings, and so, through
> change_memory_common(), permission changes become possible for
> vmalloc-huge mappings as well. A caller "misusing" the API by invoking
> it on a partial block mapping will receive -EINVAL via the pagewalk
> callbacks. This matches the API's previous behaviour of returning
> -EINVAL (through apply_to_page_range() failing on block mappings), the
> difference being that, courtesy of patch 1, the -EINVAL is now
> restricted to attempts to play permission games on partial block
> mappings.
>
> Signed-off-by: Dev Jain <dev.jain@....com>
> ---
> arch/arm64/mm/pageattr.c | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index cfc5279f27a2..66676f7f432a 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -195,8 +195,6 @@ static int change_memory_common(unsigned long addr, int numpages,
> * we are operating on does not result in such splitting.
> *
> * Let's restrict ourselves to mappings created by vmalloc (or vmap).
> - * Disallow VM_ALLOW_HUGE_VMAP mappings to guarantee that only page
> - * mappings are updated and splitting is never needed.
> *
> * So check whether the [addr, addr + size) interval is entirely
> * covered by precisely one VM area that has the VM_ALLOC flag set.
> @@ -204,7 +202,7 @@ static int change_memory_common(unsigned long addr, int numpages,
> area = find_vm_area((void *)addr);
> if (!area ||
> end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
> - ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
> + !(area->flags & VM_ALLOC))
> return -EINVAL;
>
> if (!numpages)
I'd be inclined to leave this restriction in place for now. It isn't useful
until we have the context of the full vmalloc-huge-by-default series, I don't think?