Message-ID: <20241209094227.1529977-2-quic_zhenhuah@quicinc.com>
Date: Mon, 9 Dec 2024 17:42:26 +0800
From: Zhenhua Huang <quic_zhenhuah@...cinc.com>
To: <catalin.marinas@....com>, <will@...nel.org>, <ardb@...nel.org>,
<ryan.roberts@....com>, <mark.rutland@....com>, <joey.gouly@....com>,
<dave.hansen@...ux.intel.com>, <akpm@...ux-foundation.org>,
<chenfeiyang@...ngson.cn>, <chenhuacai@...nel.org>
CC: <linux-mm@...ck.org>, <linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>,
Zhenhua Huang <quic_zhenhuah@...cinc.com>
Subject: [PATCH v2 1/2] arm64: mm: vmemmap populate to page level if not section aligned

Commit c1cc1552616d ("arm64: MMU initialisation") optimises vmemmap
population to use PMD section mappings. However, if the start or end of
the range is not aligned to a section boundary, such as when a
subsection is hot-added, populating the entire section is wasteful:
even if only one subsection is hot-added, struct page metadata for the
whole section is still populated. In such cases it is more efficient
to populate at page granularity.
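
For a rough sense of the waste, here is a minimal userspace sketch of
the arithmetic, assuming typical arm64 4K-page defaults
(SECTION_SIZE_BITS of 27, 2 MiB subsections, a 64-byte struct page);
the constants are assumptions for illustration, not values taken from
any particular kernel build:

#include <stdio.h>

/* Assumed arm64 4K-page defaults, for illustration only */
#define PAGE_SHIFT		12
#define SECTION_SIZE_BITS	27	/* 128 MiB section */
#define SUBSECTION_SHIFT	21	/* 2 MiB subsection */
#define STRUCT_PAGE_SIZE	64	/* bytes per struct page */

int main(void)
{
	unsigned long pages_per_section    = 1UL << (SECTION_SIZE_BITS - PAGE_SHIFT);
	unsigned long pages_per_subsection = 1UL << (SUBSECTION_SHIFT - PAGE_SHIFT);

	/* vmemmap populated for a whole section vs. one subsection's need */
	printf("vmemmap for full section   : %lu KiB\n",
	       pages_per_section * STRUCT_PAGE_SIZE >> 10);
	printf("vmemmap for one subsection : %lu KiB\n",
	       pages_per_subsection * STRUCT_PAGE_SIZE >> 10);
	return 0;
}

With these assumptions a single 2 MiB subsection needs only 32 KiB of
vmemmap, yet the section-level mapping populates the full 2 MiB.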

This change also addresses a mismatch in vmemmap_free(): when
pmd_sect() is true, the entire PMD section is cleared, even if other
subsections mapped by that PMD are still in use. For example, suppose
pagemap1 and pagemap2 share a single PMD entry and are hot-added
sequentially. If pagemap1 is later removed, vmemmap_free() clears the
whole PMD entry and frees the struct page metadata for the entire
section, even though pagemap2 is still active.
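
The decision this patch adds can be mocked in userspace as below; the
pfn values, the PAGES_PER_SECTION value and the use_base_pages() helper
are hypothetical stand-ins that only mirror the IS_ALIGNED() checks in
vmemmap_populate(), not kernel code:

#include <stdbool.h>
#include <stdio.h>

#define PAGES_PER_SECTION	(1UL << 15)	/* assumed: 128 MiB / 4 KiB */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

/* Fall back to base pages whenever either end of the hot-added pfn
 * range is not section aligned, e.g. a single subsection. */
static bool use_base_pages(unsigned long start_pfn, unsigned long end_pfn)
{
	return !IS_ALIGNED(start_pfn, PAGES_PER_SECTION) ||
	       !IS_ALIGNED(end_pfn, PAGES_PER_SECTION);
}

int main(void)
{
	/* One 2 MiB subsection (512 pages) added mid-section */
	printf("subsection add   -> base pages? %d\n",
	       use_base_pages(0x8200, 0x8400));
	/* A fully section-aligned range keeps the PMD mapping */
	printf("full section add -> base pages? %d\n",
	       use_base_pages(0x8000, 0x10000));
	return 0;
}
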
Fixes: c1cc1552616d ("arm64: MMU initialisation")
Signed-off-by: Zhenhua Huang <quic_zhenhuah@...cinc.com>
---
arch/arm64/mm/mmu.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index e2739b69e11b..fd59ee44960e 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1177,7 +1177,9 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
 
-	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
+	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
+	    !IS_ALIGNED(page_to_pfn((struct page *)start), PAGES_PER_SECTION) ||
+	    !IS_ALIGNED(page_to_pfn((struct page *)end), PAGES_PER_SECTION))
 		return vmemmap_populate_basepages(start, end, node, altmap);
 	else
 		return vmemmap_populate_hugepages(start, end, node, altmap);
--
2.25.1