Message-ID: <c15c115a-b199-4933-a634-1f0679565e25@os.amperecomputing.com>
Date: Tue, 14 Oct 2025 13:15:41 -0700
From: Yang Shi <yang@...amperecomputing.com>
To: Ryan Roberts <ryan.roberts@....com>, dev.jain@....com, cl@...two.org,
catalin.marinas@....com, will@...nel.org
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] arm64: mm: make linear mapping permission update more
 robust for partial range
On 10/14/25 1:05 AM, Ryan Roberts wrote:
> On 14/10/2025 00:27, Yang Shi wrote:
>> The commit fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing
>> permissions for vmalloc_huge mappings") made the permission update for a
>> partial range more robust. But the linear mapping permission update still
>> assumes the whole area is updated, iterating from the first page of the
>> area all the way to the last page.
>>
>> Make it more robust by starting the linear mapping permission update at
>> the page mapped by the start address, and updating only numpages pages.
>>
>> Fixes: fcf8dda8cc48 ("arm64: pageattr: Explicitly bail out when changing permissions for vmalloc_huge mappings")
> I don't think this is the correct commit. I think this theoretical issue was
> present before that. But it is only theoretical AFAIK? In which case, I'd be
> inclined to just drop the tag.
OK, I will drop the tag.
>
> Otherwise, LGTM:
>
> Reviewed-by: Ryan Roberts <ryan.roberts@....com>
Thank you.
Yang
>
>> Signed-off-by: Yang Shi <yang@...amperecomputing.com>
>> ---
>> arch/arm64/mm/pageattr.c | 6 +++---
>> 1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index 5135f2d66958..c21a2c319028 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -148,7 +148,6 @@ static int change_memory_common(unsigned long addr, int numpages,
>> unsigned long size = PAGE_SIZE * numpages;
>> unsigned long end = start + size;
>> struct vm_struct *area;
>> - int i;
>>
>> if (!PAGE_ALIGNED(addr)) {
>> start &= PAGE_MASK;
>> @@ -184,8 +183,9 @@ static int change_memory_common(unsigned long addr, int numpages,
>> */
>> if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>> pgprot_val(clear_mask) == PTE_RDONLY)) {
>> - for (i = 0; i < area->nr_pages; i++) {
>> - __change_memory_common((u64)page_address(area->pages[i]),
>> + unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
>> + for (int i = 0; i < numpages; i++) {
>> + __change_memory_common((u64)page_address(area->pages[idx++]),
>> PAGE_SIZE, set_mask, clear_mask);
>> }
>> }
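
For anyone skimming, the fix above boils down to computing the index of the
first affected page within area->pages[] and walking exactly numpages
entries, rather than all of area->nr_pages. A minimal standalone sketch of
that logic (simplified types; change_page_prot() and struct vm_area_sketch
are hypothetical stand-ins for __change_memory_common() and struct
vm_struct, not the kernel's actual API):

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

struct page;

/* Simplified stand-in for struct vm_struct. */
struct vm_area_sketch {
	void *addr;		/* base virtual address of the area */
	struct page **pages;	/* one entry per mapped page */
	unsigned int nr_pages;	/* total pages in the area */
};

/* Hypothetical stub standing in for __change_memory_common(). */
static void change_page_prot(struct page *page)
{
	(void)page;	/* would update the linear-map permissions here */
}

/*
 * Update only the pages covered by [start, start + numpages * PAGE_SIZE)
 * instead of iterating over all nr_pages of the area.
 */
static void update_partial_range(struct vm_area_sketch *area,
				 unsigned long start, int numpages)
{
	/* Index of the first affected page within area->pages[]. */
	unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;

	for (int i = 0; i < numpages; i++)
		change_page_prot(area->pages[idx + i]);
}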