Message-ID: <0aac96b5-b3ac-47ee-97af-7ca5d927bdd0@arm.com>
Date: Tue, 1 Apr 2025 10:43:01 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Mike Rapoport <rppt@...nel.org>
Cc: Dev Jain <dev.jain@....com>, catalin.marinas@....com, will@...nel.org,
gshan@...hat.com, steven.price@....com, suzuki.poulose@....com,
tianyaxiong@...inos.cn, ardb@...nel.org, david@...hat.com, urezki@...il.com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64: pageattr: Explicitly bail out when changing
permissions for vmalloc_huge mappings
On 30/03/2025 03:32, Mike Rapoport wrote:
> On Sat, Mar 29, 2025 at 09:46:56AM +0000, Ryan Roberts wrote:
>> On 28/03/2025 18:50, Mike Rapoport wrote:
>>> On Fri, Mar 28, 2025 at 11:51:03AM +0530, Dev Jain wrote:
>>>> arm64 uses apply_to_page_range to change permissions for kernel VA mappings,
>>>
>>> for vmalloc mappings ^
>>>
>>> arm64 does not allow changing permissions to any VA address right now.
>>>
>>>> which does not support changing permissions for leaf mappings. This function
>>>> will change permissions until it encounters a leaf mapping, and will bail
>>>> out. To avoid this partial change, explicitly disallow changing permissions
>>>> for VM_ALLOW_HUGE_VMAP mappings.
>>>>
>>>> Signed-off-by: Dev Jain <dev.jain@....com>
>>
>> I wonder if we want a Fixes: tag here? It's certainly a *latent* bug.
>
> We have only a few places that use vmalloc_huge() or VM_ALLOW_HUGE_VMAP and
> if there was a code that plays permission games on these allocations, x86
> set_memory would blow up immediately, so I don't think Fixes: is needed
> here.
Hi Mike,

I think I may have misunderstood your comments when we spoke at LSF/MM the other
day, as this statement seems to contradict them. I thought you said that on x86,
BPF allocates memory using vmalloc_huge()/VM_ALLOW_HUGE_VMAP and then its
sub-allocator calls set_memory_*() on a sub-region of that allocation? (And we
then agreed that it would be good for arm64 to eventually support this with BBML2.)
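
To make sure we're talking about the same pattern, what I had in mind is
something like the below (a hypothetical sketch, not the actual BPF code; the
function name and sizes are made up):

#include <linux/vmalloc.h>
#include <linux/gfp.h>
#include <linux/set_memory.h>

static int example_prot_sub_region(void)
{
	/* Allocation that vmalloc may back with block (huge) mappings. */
	void *region = vmalloc_huge(2 * 1024 * 1024, GFP_KERNEL);

	if (!region)
		return -ENOMEM;

	/*
	 * Change permissions on a single page inside the potentially
	 * huge-mapped allocation. This is the case that arm64's
	 * change_memory_common() can't handle today and which this patch
	 * now rejects explicitly.
	 */
	return set_memory_ro((unsigned long)region, 1);
}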

Regardless, I think this change is a useful first step toward improving
vmalloc, as it makes us more defensive against any future attempt to change
permissions on a huge allocation. In the long term I'd like to get to the point
where arm64 (with BBML2) can map with VM_ALLOW_HUGE_VMAP by default.
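
(For anyone skimming the diff: the new condition effectively requires that
VM_ALLOC is set and VM_ALLOW_HUGE_VMAP is clear, i.e. roughly

	if (!(area->flags & VM_ALLOC) ||
	    (area->flags & VM_ALLOW_HUGE_VMAP))
		return -EINVAL;

so plain page-mapped vmalloc regions are still accepted, and anything that may
contain block mappings is refused up front.)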
Thanks,
Ryan
>
>>>> ---
>>>> arch/arm64/mm/pageattr.c | 4 ++--
>>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>>>> index 39fd1f7ff02a..8337c88eec69 100644
>>>> --- a/arch/arm64/mm/pageattr.c
>>>> +++ b/arch/arm64/mm/pageattr.c
>>>> @@ -96,7 +96,7 @@ static int change_memory_common(unsigned long addr, int numpages,
>>>> * we are operating on does not result in such splitting.
>>>> *
>>>> * Let's restrict ourselves to mappings created by vmalloc (or vmap).
>>>> - * Those are guaranteed to consist entirely of page mappings, and
>>>> + * Disallow VM_ALLOW_HUGE_VMAP vmalloc mappings so that
>>>
>>> I'd keep mention of page mappings in the comment, e.g
>>>
>>> * Disallow VM_ALLOW_HUGE_VMAP mappings to guarantee that only page
>>> * mappings are updated and splitting is never needed.
>>>
>>> With this and changelog updates Ryan asked for
>>>
>>> Reviewed-by: Mike Rapoport (Microsoft) <rppt@...nel.org>
>>>
>>>
>>>> * splitting is never needed.
>>>> *
>>>> * So check whether the [addr, addr + size) interval is entirely
>>>> @@ -105,7 +105,7 @@ static int change_memory_common(unsigned long addr, int numpages,
>>>> area = find_vm_area((void *)addr);
>>>> if (!area ||
>>>> end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
>>>> - !(area->flags & VM_ALLOC))
>>>> + ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
>>>> return -EINVAL;
>>>>
>>>> if (!numpages)
>>>> --
>>>> 2.30.2
>>>>
>>>
>>
>