Message-ID: <Z6zoWMejCDlN2YF9@arm.com>
Date: Wed, 12 Feb 2025 18:28:40 +0000
From: Catalin Marinas <catalin.marinas@....com>
To: Zhenhua Huang <quic_zhenhuah@...cinc.com>
Cc: anshuman.khandual@....com, will@...nel.org, ardb@...nel.org,
ryan.roberts@....com, mark.rutland@....com, joey.gouly@....com,
dave.hansen@...ux.intel.com, akpm@...ux-foundation.org,
chenfeiyang@...ngson.cn, chenhuacai@...nel.org, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
quic_tingweiz@...cinc.com, stable@...r.kernel.org
Subject: Re: [PATCH v5] arm64: mm: Populate vmemmap/linear at the page level
for hotplugged sections
On Thu, Jan 09, 2025 at 05:38:24PM +0800, Zhenhua Huang wrote:
> On the arm64 platform with the 4K base page config, SECTION_SIZE_BITS is
> set to 27, making one section 128M. The corresponding struct page area
> that vmemmap points to is then 2M per section.
> Commit c1cc1552616d ("arm64: MMU initialisation") optimized vmemmap
> population at the PMD section level, which was suitable initially since
> the hotplug granule was always one section (128M). However, commit
> ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug") introduced a
> 2M (SUBSECTION_SIZE) hotplug granule, which broke the existing arm64
> assumptions.
>
> Consider the vmemmap_free -> unmap_hotplug_pmd_range path: when
> pmd_sect() is true, the entire PMD section is cleared, even if other
> subsections in it are still in use. For example, suppose
> page_struct_map1 and page_struct_map2 are part of a single PMD entry and
> are hot-added sequentially. When page_struct_map1 is then removed,
> vmemmap_free() will clear the entire PMD entry, freeing the struct page
> map for the whole section, even though page_struct_map2 is still active.
> A similar problem exists for the linear mapping: with a 16K base page
> (PMD size = 32M) or a 64K base page (PMD size = 512M), block mappings
> exceed SUBSECTION_SIZE, so tearing down the entire PMD mapping likewise
> leaves other subsections unmapped in the linear mapping.
>
> To address the issue, we need to prevent PMD/PUD/CONT mappings for both
> the linear map and vmemmap for non-boot sections whenever the
> corresponding mapping size on the given base page config exceeds
> SUBSECTION_SIZE (currently 2MB).
>
> Cc: stable@...r.kernel.org # v5.4+
> Fixes: ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
> Signed-off-by: Zhenhua Huang <quic_zhenhuah@...cinc.com>
> ---
> Hi Catalin and Anshuman,
> I have addressed the comments so far; please help review.
> One outstanding point that is not yet finalized is in vmemmap_populate(): how to
> tell whether a section is hotplugged. Currently I am using system_state; see the
> discussion:
> https://lore.kernel.org/linux-mm/1515dae4-cb53-4645-8c72-d33b27ede7eb@quicinc.com/
The patch looks fine to me, apart from one nit and a question below:
> @@ -1339,9 +1349,27 @@ int arch_add_memory(int nid, u64 start, u64 size,
> struct mhp_params *params)
> {
> int ret, flags = NO_EXEC_MAPPINGS;
> + unsigned long start_pfn = PFN_DOWN(start);
> + struct mem_section *ms = __pfn_to_section(start_pfn);
>
> VM_BUG_ON(!mhp_range_allowed(start, size, true));
>
> + /* should not be invoked by early section */
> + WARN_ON(early_section(ms));
I don't remember the discussion: do we still need this warning here if
the sections are not marked as early? I guess we can keep it in case one
does an arch_add_memory() on an early section.
I think I suggested using WARN_ON_ONCE(!present_section()), but I have
completely forgotten the memory hotplug code paths.
> +
> + /*
> + * Disallow BlOCK/CONT mappings if the corresponding size exceeds
Nit: capital L in BlOCK.
Either way,
Reviewed-by: Catalin Marinas <catalin.marinas@....com>