Message-ID: <20251013232803.3065100-3-yang@os.amperecomputing.com>
Date: Mon, 13 Oct 2025 16:27:31 -0700
From: Yang Shi <yang@...amperecomputing.com>
To: ryan.roberts@....com,
dev.jain@....com,
cl@...two.org,
catalin.marinas@....com,
will@...nel.org
Cc: yang@...amperecomputing.com,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: [PATCH 2/2] arm64: mm: relax VM_ALLOW_HUGE_VMAP if BBML2_NOABORT is supported
When changing permissions for a vmalloc area, VM_ALLOW_HUGE_VMAP areas are
excluded because the kernel can't split the VA mapping if the change covers
only a partial range.

This is no longer true on machines that support BBML2_NOABORT, after commit
a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full").
So we can relax this restriction and update the comments accordingly.
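For illustration only (a hypothetical caller, not part of this patch): a
permission change on a sub-range of a VM_ALLOW_HUGE_VMAP area, as sketched
below, used to fail with -EINVAL unconditionally; with this change it is
allowed when the CPU supports BBML2_NOABORT. Cleanup (restoring permissions,
vfree()) is omitted for brevity.

#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/vmalloc.h>
#include <linux/set_memory.h>

static int example_set_ro_subrange(void)
{
	/*
	 * vmalloc_huge() may back the allocation with block mappings,
	 * so the area carries VM_ALLOW_HUGE_VMAP (in addition to VM_ALLOC).
	 */
	void *p = vmalloc_huge(2 * PMD_SIZE, GFP_KERNEL);

	if (!p)
		return -ENOMEM;

	/*
	 * Change permissions of the first page only. Previously this
	 * returned -EINVAL for any VM_ALLOW_HUGE_VMAP area; with this
	 * patch it succeeds on BBML2_NOABORT systems because the block
	 * mapping can be safely split.
	 */
	return set_memory_ro((unsigned long)p, 1);
}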
Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
Signed-off-by: Yang Shi <yang@...amperecomputing.com>
---
arch/arm64/mm/pageattr.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index c21a2c319028..b4dcae6273a8 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -157,13 +157,13 @@ static int change_memory_common(unsigned long addr, int numpages,
/*
* Kernel VA mappings are always live, and splitting live section
- * mappings into page mappings may cause TLB conflicts. This means
- * we have to ensure that changing the permission bits of the range
- * we are operating on does not result in such splitting.
+ * mappings into page mappings may cause TLB conflicts on machines
+ * that don't support BBML2_NOABORT.
*
* Let's restrict ourselves to mappings created by vmalloc (or vmap).
- * Disallow VM_ALLOW_HUGE_VMAP mappings to guarantee that only page
- * mappings are updated and splitting is never needed.
+ * Disallow VM_ALLOW_HUGE_VMAP mappings if the system doesn't support
+ * BBML2_NOABORT, to guarantee that only page mappings are updated and
+ * splitting is never needed on such machines.
*
* So check whether the [addr, addr + size) interval is entirely
* covered by precisely one VM area that has the VM_ALLOC flag set.
@@ -171,7 +171,8 @@ static int change_memory_common(unsigned long addr, int numpages,
area = find_vm_area((void *)addr);
if (!area ||
end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
- ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
+ !(area->flags & VM_ALLOC) || ((area->flags & VM_ALLOW_HUGE_VMAP) &&
+ !system_supports_bbml2_noabort()))
return -EINVAL;
if (!numpages)
--
2.47.0