Message-ID: <3aa3558d-cfc6-43a1-8c73-9b01ed1e2b3e@os.amperecomputing.com>
Date: Wed, 12 Nov 2025 14:27:49 -0800
From: Yang Shi <yang@...amperecomputing.com>
To: Dev Jain <dev.jain@....com>, catalin.marinas@....com, will@...nel.org
Cc: ryan.roberts@....com, rppt@...nel.org, shijie@...amperecomputing.com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] arm64/pageattr: Propagate return value from
__change_memory_common
On 11/11/25 10:27 PM, Dev Jain wrote:
> The rodata=on security measure requires that any code path which does
> vmalloc -> set_memory_ro/set_memory_rox must protect the linear map alias
> too. Therefore, if such a call fails, we must abort set_memory_* and the
> caller must take appropriate action; currently we suppress the error, and
> there is a real chance of such an error arising after commit a166563e7ec3
> ("arm64: mm: support large block mapping when rodata=full"). Fix this by
> propagating any error to the caller.
>
> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
> Signed-off-by: Dev Jain <dev.jain@....com>
Thanks for fixing this. My old patches propagated the return value of the
page table split, which was called outside __change_memory_common(), but
that got missed in the final patches.
The fix looks good to me.

Reviewed-by: Yang Shi <yang@...amperecomputing.com>
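
Just to illustrate why the propagation matters (this sketch is mine, not
from any tree, and the helper name is hypothetical): a vmalloc user can
now actually detect that the linear map alias was not protected and back
out, e.g.:

#include <linux/vmalloc.h>
#include <linux/set_memory.h>

/* Hypothetical helper, for illustration only. */
static void *alloc_ro_buffer(void)
{
	void *buf = vmalloc(PAGE_SIZE);
	int err;

	if (!buf)
		return NULL;

	/*
	 * With the fix, a failed linear map alias update (e.g. a block
	 * mapping split failing under rodata=full) is reported here
	 * instead of being silently swallowed.
	 */
	err = set_memory_ro((unsigned long)buf, 1);
	if (err) {
		/*
		 * The alias may still be writable, so don't hand the
		 * buffer out. A real user would also need to make sure
		 * permissions are reset before the pages are reused
		 * (e.g. via VM_FLUSH_RESET_PERMS).
		 */
		vfree(buf);
		return NULL;
	}

	return buf;
}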
Yang
> ---
> v1 of this patch: https://lore.kernel.org/all/20251103061306.82034-1-dev.jain@arm.com/
> I have dropped stable since there was no real chance of failure before
> the commit being fixed.
>
> arch/arm64/mm/pageattr.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 5135f2d66958..b4ea86cd3a71 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
>  	unsigned long size = PAGE_SIZE * numpages;
>  	unsigned long end = start + size;
>  	struct vm_struct *area;
> +	int ret;
>  	int i;
>  
>  	if (!PAGE_ALIGNED(addr)) {
> @@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
>  	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>  			    pgprot_val(clear_mask) == PTE_RDONLY)) {
>  		for (i = 0; i < area->nr_pages; i++) {
> -			__change_memory_common((u64)page_address(area->pages[i]),
> +			ret = __change_memory_common((u64)page_address(area->pages[i]),
>  					PAGE_SIZE, set_mask, clear_mask);
> +			if (ret)
> +				return ret;
>  		}
>  	}
>  
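
With the hunk applied, the loop reads as below (excerpt, annotated by me);
note that aliases already switched in earlier iterations are not rolled
back on failure, so the caller has to treat the whole operation as failed:

	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
			    pgprot_val(clear_mask) == PTE_RDONLY)) {
		for (i = 0; i < area->nr_pages; i++) {
			/* update the linear map alias of each backing page */
			ret = __change_memory_common((u64)page_address(area->pages[i]),
					PAGE_SIZE, set_mask, clear_mask);
			if (ret)
				/* abort on the first alias that fails */
				return ret;
		}
	}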