Message-ID: <9f55664b-c510-4565-943b-0bed2d43898d@arm.com>
Date: Tue, 4 Nov 2025 18:52:49 +0530
From: Dev Jain <dev.jain@....com>
To: Will Deacon <will@...nel.org>
Cc: Yang Shi <yang@...amperecomputing.com>, catalin.marinas@....com,
ryan.roberts@....com, rppt@...nel.org, shijie@...amperecomputing.com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Subject: Re: [PATCH] arm64/pageattr: Propagate return value from __change_memory_common
On 04/11/25 6:26 pm, Will Deacon wrote:
> On Tue, Nov 04, 2025 at 09:06:12AM +0530, Dev Jain wrote:
>> On 04/11/25 12:15 am, Yang Shi wrote:
>>> On 11/3/25 7:16 AM, Will Deacon wrote:
>>>> On Mon, Nov 03, 2025 at 11:43:06AM +0530, Dev Jain wrote:
>>>>> Post a166563e7ec3 ("arm64: mm: support large block mapping when
>>>>> rodata=full"), __change_memory_common has a real chance of failing
>>>>> due to a split failure. Before that commit, this line was introduced
>>>>> in c55191e96caa, and could already fail if apply_to_page_range needed
>>>>> to allocate pagetable memory, although that failure has never been
>>>>> observed in practice. In general, we should always propagate the
>>>>> return value to the caller.
>>>>>
>>>>> Cc: stable@...r.kernel.org
>>>>> Fixes: c55191e96caa ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well")
>>>>> Signed-off-by: Dev Jain <dev.jain@....com>
>>>>> ---
>>>>> Based on Linux 6.18-rc4.
>>>>>
>>>>> arch/arm64/mm/pageattr.c | 5 ++++-
>>>>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>>>>> index 5135f2d66958..b4ea86cd3a71 100644
>>>>> --- a/arch/arm64/mm/pageattr.c
>>>>> +++ b/arch/arm64/mm/pageattr.c
>>>>> @@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
>>>>>          unsigned long size = PAGE_SIZE * numpages;
>>>>>          unsigned long end = start + size;
>>>>>          struct vm_struct *area;
>>>>> +        int ret;
>>>>>          int i;
>>>>>
>>>>>          if (!PAGE_ALIGNED(addr)) {
>>>>> @@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
>>>>>          if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>>>>>                              pgprot_val(clear_mask) == PTE_RDONLY)) {
>>>>>                  for (i = 0; i < area->nr_pages; i++) {
>>>>> -                        __change_memory_common((u64)page_address(area->pages[i]),
>>>>> +                        ret = __change_memory_common((u64)page_address(area->pages[i]),
>>>>>                                                 PAGE_SIZE, set_mask, clear_mask);
>>>>> +                        if (ret)
>>>>> +                                return ret;
>>>> Hmm, this means we can return failure half-way through the operation. Is
>>>> that something callers are expecting to handle? If so, how can they tell
>>>> how far we got?
>>> IIUC the callers don't have to know whether it is half-way or not,
>>> because the callers will change the permissions back (e.g. to RW) for
>>> the whole range when freeing memory.
>> Yes, it is the caller's responsibility to set VM_FLUSH_RESET_PERMS flag.
>> Upon vfree(), it will change the direct map permissions back to RW.
> Ok, but vfree() ends up using update_range_prot() to do that and if we
> need to worry about that failing (as per your commit message), then
> we're in trouble because the calls to set_area_direct_map() are unchecked.
>
> In other words, this patch is either not necessary or it is incomplete.
I think we concluded in the discussion of the linear map series that those
calls will always succeed - I'll refresh my memory on that and get back to
you later!
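
For reference, the caller pattern under discussion looks roughly like the
minimal sketch below. The function and buffer size are hypothetical, but
vmalloc(), set_vm_flush_reset_perms(), set_memory_ro() and vfree() are the
mainline interfaces (needs <linux/vmalloc.h> and <linux/set_memory.h>):

	/* Hypothetical caller that makes its vmalloc'ed pages read-only. */
	static void *alloc_ro_buffer(void)
	{
		void *p = vmalloc(PAGE_SIZE);
		int ret;

		if (!p)
			return NULL;

		/* Tell vfree() to reset the direct-map alias to RW on free. */
		set_vm_flush_reset_perms(p);

		/* This walks the linear alias page-by-page when rodata_full,
		 * and post a166563e7ec3 it can fail part-way through. */
		ret = set_memory_ro((unsigned long)p, 1);
		if (ret) {
			vfree(p);	/* restores RW via set_area_direct_map() */
			return NULL;
		}
		return p;
	}

Note that the vfree() on the failure path is exactly where the unchecked
set_area_direct_map() calls mentioned above come into play.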
>
> Will