Message-ID: <833b32ef-20f8-44a5-9d00-e56f818e49ca@os.amperecomputing.com>
Date: Tue, 14 Oct 2025 13:23:33 -0700
From: Yang Shi <yang@...amperecomputing.com>
To: Ryan Roberts <ryan.roberts@....com>, dev.jain@....com, cl@...two.org,
catalin.marinas@....com, will@...nel.org
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] arm64: mm: relax VM_ALLOW_HUGE_VMAP if BBML2_NOABORT
is supported
On 10/14/25 1:08 AM, Ryan Roberts wrote:
> On 14/10/2025 00:27, Yang Shi wrote:
>> When changing permissions for a vmalloc area, VM_ALLOW_HUGE_VMAP areas are
>> excluded because the kernel can't split the VA mapping if it is called on a
>> partial range.
>> This is no longer true on machines that support BBML2_NOABORT, after commit
>> a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full").
>> So we can relax this restriction and update the comments accordingly.
> Is there actually any user that benefits from this modified behaviour in the
> current kernel? If not, then I'd prefer to leave this for Dev to modify
> systematically as part of his series to enable VM_ALLOW_HUGE_VMAP by default for
> arm64. I believe he's planning to post that soon.
I actually just wanted to fix the stale comment about "splitting is
never needed" in the first place, as we discussed in the earlier review,
but I realized it doesn't make much sense to update the comment without
also updating the behavior. The VM_ALLOW_HUGE_VMAP areas were skipped
because the VA mapping couldn't be split, so it makes more sense to
relax the restriction now that the mapping can be split.
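To make that concrete, here is the gist of the guard in
change_memory_common() before and after this patch (condensed from the
hunk quoted below):

	/* before: any VM_ALLOW_HUGE_VMAP area is rejected outright */
	if (!area ||
	    end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
	    ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
		return -EINVAL;

	/* after: huge areas are accepted when the CPU supports BBML2_NOABORT */
	if (!area ||
	    end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
	    !(area->flags & VM_ALLOC) ||
	    ((area->flags & VM_ALLOW_HUGE_VMAP) &&
	     !system_supports_bbml2_noabort()))
		return -EINVAL;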
I see this patch more as a follow-up fix for kernel page table splitting
than as an enhancement to VM_ALLOW_HUGE_VMAP. It also seems like a
prerequisite for Dev's series, IMHO.
Thanks,
Yang
>
> Thanks,
> Ryan
>
>> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
>> Signed-off-by: Yang Shi <yang@...amperecomputing.com>
>> ---
>> arch/arm64/mm/pageattr.c | 13 +++++++------
>> 1 file changed, 7 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index c21a2c319028..b4dcae6273a8 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -157,13 +157,13 @@ static int change_memory_common(unsigned long addr, int numpages,
>>
>> /*
>> * Kernel VA mappings are always live, and splitting live section
>> - * mappings into page mappings may cause TLB conflicts. This means
>> - * we have to ensure that changing the permission bits of the range
>> - * we are operating on does not result in such splitting.
>> + * mappings into page mappings may cause TLB conflicts on the machines
>> + * which don't support BBML2_NOABORT.
>> *
>> * Let's restrict ourselves to mappings created by vmalloc (or vmap).
>> - * Disallow VM_ALLOW_HUGE_VMAP mappings to guarantee that only page
>> - * mappings are updated and splitting is never needed.
>> + * Disallow VM_ALLOW_HUGE_VMAP mappings if the systems don't support
>> + * BBML2_NOABORT to guarantee that only page mappings are updated and
>> + * splitting is never needed on those machines.
>> *
>> * So check whether the [addr, addr + size) interval is entirely
>> * covered by precisely one VM area that has the VM_ALLOC flag set.
>> @@ -171,7 +171,8 @@ static int change_memory_common(unsigned long addr, int numpages,
>> area = find_vm_area((void *)addr);
>> if (!area ||
>> end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
>> - ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
>> + !(area->flags & VM_ALLOC) || ((area->flags & VM_ALLOW_HUGE_VMAP) &&
>> + !system_supports_bbml2_noabort()))
>> return -EINVAL;
>>
>> if (!numpages)