Message-ID: <a6d3d2d3-32b5-4784-98d9-1b42a0ef4616@os.amperecomputing.com>
Date: Mon, 10 Nov 2025 15:08:37 -0800
From: Yang Shi <yang@...amperecomputing.com>
To: ryan.roberts@....com, dev.jain@....com, cl@...two.org,
catalin.marinas@....com, will@...nel.org
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [v2 PATCH] arm64: mm: make linear mapping permission update more
 robust for partial range
Hi folks,
Gently ping...
It is not an urgent fix, either 6.18 or 6.19 is fine.
Thanks,
Yang
On 10/23/25 1:44 PM, Yang Shi wrote:
> Commit fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing
> permissions for vmalloc_huge mappings") made the permission update for a
> partial range more robust. But the linear mapping permission update
> still assumes the whole range is being updated, iterating from the
> first page of the area all the way to its last page.
>
> Make it more robust by starting the linear mapping permission update at
> the page mapped by the start address and updating exactly numpages
> pages.
>
> Reviewed-by: Ryan Roberts <ryan.roberts@....com>
> Reviewed-by: Dev Jain <dev.jain@....com>
> Signed-off-by: Yang Shi <yang@...amperecomputing.com>
> ---
> v2: * Dropped the fixes tag per Ryan and Dev
>     * Simplified the loop per Dev
>     * Collected R-bs
>
>  arch/arm64/mm/pageattr.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 5135f2d66958..08ac96b9f846 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -148,7 +148,6 @@ static int change_memory_common(unsigned long addr, int numpages,
>  	unsigned long size = PAGE_SIZE * numpages;
>  	unsigned long end = start + size;
>  	struct vm_struct *area;
> -	int i;
>  
>  	if (!PAGE_ALIGNED(addr)) {
>  		start &= PAGE_MASK;
> @@ -184,8 +183,9 @@ static int change_memory_common(unsigned long addr, int numpages,
>  	 */
>  	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>  			    pgprot_val(clear_mask) == PTE_RDONLY)) {
> -		for (i = 0; i < area->nr_pages; i++) {
> -			__change_memory_common((u64)page_address(area->pages[i]),
> +		unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
> +		for (; numpages; idx++, numpages--) {
> +			__change_memory_common((u64)page_address(area->pages[idx]),
>  					       PAGE_SIZE, set_mask, clear_mask);
>  		}
>  	}
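
For reference, here is a minimal userspace sketch of the index arithmetic
the patch relies on. The base address, offset, and page count below are
made up for illustration, and PAGE_SHIFT is assumed to be 12 (4K pages):

	#include <stdio.h>

	#define PAGE_SHIFT	12			/* assumed 4K pages */
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)

	int main(void)
	{
		/* hypothetical stand-ins for area->addr and the requested range */
		unsigned long base = 0xffff800008000000UL;
		unsigned long start = base + 3 * PAGE_SIZE;
		int numpages = 2;

		/* same computation as the patch: offset into area->pages[] */
		unsigned long idx = (start - base) >> PAGE_SHIFT;

		for (; numpages; idx++, numpages--)
			printf("update pages[%lu]\n", idx);

		return 0;
	}

With these values the loop visits pages[3] and pages[4] only, rather
than every page of the area as the old loop did.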