Message-ID: <9fdb8448-7635-46a9-8e81-a738eaa098ef@arm.com>
Date: Mon, 3 Nov 2025 14:04:37 +0530
From: Dev Jain <dev.jain@....com>
To: Anshuman Khandual <anshuman.khandual@....com>, catalin.marinas@....com,
 will@...nel.org
Cc: ryan.roberts@....com, rppt@...nel.org, shijie@...amperecomputing.com,
 yang@...amperecomputing.com, linux-arm-kernel@...ts.infradead.org,
 linux-kernel@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [PATCH] arm64/pageattr: Propagate return value from
 __change_memory_common
On 03/11/25 1:18 pm, Anshuman Khandual wrote:
> On 03/11/25 11:43 AM, Dev Jain wrote:
>> Post a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full"),
>> __change_memory_common has a real chance of failing due to split failure.
>> Before that commit, this line (introduced in c55191e96caa) already had
> A small nit:
>
> Commit description needs to follow after the SHA ID ^^^^^^^^^^
I didn't do that for brevity's sake; the description is there in the Fixes: tag.
>> a chance of failing if it needed to allocate pagetable memory in
>> apply_to_page_range, although that has never been observed in practice.
>> In general, we should always propagate the return value to the caller.
>>
>> Cc: stable@...r.kernel.org
>> Fixes: c55191e96caa ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well")
> Does it really need a Fixes: tag? There is no problem being fixed.
If an error happens while changing permissions on the linear map alias, it
will be silently suppressed because the return value is never checked.
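To illustrate, here is a minimal before/after sketch of that pattern
(condensed from the diff below, not the verbatim kernel code):

        /* Before: the return value of __change_memory_common() is
         * dropped, so a failure on the linear map alias is silently
         * ignored and the caller still sees success.
         */
        for (i = 0; i < area->nr_pages; i++)
                __change_memory_common((u64)page_address(area->pages[i]),
                                       PAGE_SIZE, set_mask, clear_mask);

        /* After: the first failure is propagated to the caller. */
        for (i = 0; i < area->nr_pages; i++) {
                ret = __change_memory_common((u64)page_address(area->pages[i]),
                                             PAGE_SIZE, set_mask, clear_mask);
                if (ret)
                        return ret;
        }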
>> Signed-off-by: Dev Jain <dev.jain@....com>
>> ---
>> Based on Linux 6.18-rc4.
>>
>>   arch/arm64/mm/pageattr.c | 5 ++++-
>>   1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index 5135f2d66958..b4ea86cd3a71 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
>>   	unsigned long size = PAGE_SIZE * numpages;
>>   	unsigned long end = start + size;
>>   	struct vm_struct *area;
>> +	int ret;
>>   	int i;
>>   
>>   	if (!PAGE_ALIGNED(addr)) {
>> @@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
>>   	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>>   			    pgprot_val(clear_mask) == PTE_RDONLY)) {
>>   		for (i = 0; i < area->nr_pages; i++) {
>> -			__change_memory_common((u64)page_address(area->pages[i]),
>> +			ret = __change_memory_common((u64)page_address(area->pages[i]),
>>   					       PAGE_SIZE, set_mask, clear_mask);
>> +			if (ret)
>> +				return ret;
>>   		}
>>   	}
> Although the change does make sense.
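For context, a hedged sketch of how a caller would now observe the failure
(set_memory_ro() is one of the callers of change_memory_common(); the error
handling shown is illustrative only, not taken from the kernel):

        /* With the fix, a split or pagetable-allocation failure on the
         * linear map alias surfaces as a non-zero return instead of
         * being swallowed; note the alias may be partially updated.
         */
        int err = set_memory_ro(addr, numpages);

        if (err)
                return err;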