Message-ID: <a5439884-551c-4104-9175-f95b0895a489@quicinc.com>
Date: Tue, 18 Feb 2025 11:07:28 +0800
From: Zhenhua Huang <quic_zhenhuah@...cinc.com>
To: David Hildenbrand <david@...hat.com>, <anshuman.khandual@....com>,
<catalin.marinas@....com>
CC: <will@...nel.org>, <ardb@...nel.org>, <ryan.roberts@....com>,
<mark.rutland@....com>, <joey.gouly@....com>,
<dave.hansen@...ux.intel.com>, <akpm@...ux-foundation.org>,
<chenfeiyang@...ngson.cn>, <chenhuacai@...nel.org>,
<linux-mm@...ck.org>, <linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <quic_tingweiz@...cinc.com>
Subject: Re: [PATCH v7] arm64: mm: Populate vmemmap at the page level if not
section aligned
On 2025/2/17 22:30, David Hildenbrand wrote:
> On 17.02.25 11:34, Zhenhua Huang wrote:
>>
>>
>> On 2025/2/17 17:44, David Hildenbrand wrote:
>>> On 17.02.25 10:29, Zhenhua Huang wrote:
>>>> On the arm64 platform with the 4K base page config, SECTION_SIZE_BITS
>>>> is set to 27, making one section 128M. The struct page region that the
>>>> vmemmap maps for one section is then 2M.
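
(To spell out the arithmetic behind these sizes, assuming the usual
64-byte struct page in a 4K-page config:

	section size       = 1 << SECTION_SIZE_BITS = 1 << 27 = 128M
	pages per section  = 128M / 4K = 32768
	memmap per section = 32768 * sizeof(struct page) = 32768 * 64 = 2M

i.e. exactly one PMD mapping's worth of vmemmap per section.)
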
>>>> Commit c1cc1552616d ("arm64: MMU initialisation") optimizes the
>>>> vmemmap to populate at the PMD section level, which was suitable
>>>> initially since the hotplug granularity was always one section
>>>> (128M). However, commit ba72b4c8cf60 ("mm/sparsemem: support
>>>> sub-section hotplug") introduced a 2M (SUBSECTION_SIZE) hotplug
>>>> granularity, which broke the existing arm64 assumptions.
>>>>
>>>> The first problem is that if start or end is not aligned to a section
>>>> boundary, such as when a subsection is hot-added, populating the
>>>> entire section is wasteful.
>>>>
>>>> The next problem is that if we hotplug something that spans part of
>>>> a 128 MiB section (subsections; let's call it memblock1), then
>>>> hotplug something that spans another part of the same 128 MiB
>>>> section (subsections; let's call it memblock2), and subsequently
>>>> unplug memblock1, vmemmap_free() will clear the entire PMD entry
>>>> that also maps memblock2, even though memblock2 is still active.
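
(As a concrete, hypothetical layout: hotplug memblock1 covering the
first 64M of a section, then memblock2 covering the last 64M of the same
section. The struct pages of both live under the same single 2M vmemmap
PMD, so unplugging memblock1 lets vmemmap_free() clear that PMD and tear
down memblock2's still-live memmap with it.)
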
>>>>
>>>> Assuming hotplug/unplug sizes are guaranteed to be symmetric, fix
>>>> this the way x86-64 does: populate at the page level if start/end
>>>> is not aligned to a section boundary.
>>>>
>>>> Signed-off-by: Zhenhua Huang <quic_zhenhuah@...cinc.com>
>>>> ---
>>>> arch/arm64/mm/mmu.c | 3 ++-
>>>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>>> index b4df5bc5b1b8..eec1666da368 100644
>>>> --- a/arch/arm64/mm/mmu.c
>>>> +++ b/arch/arm64/mm/mmu.c
>>>> @@ -1178,7 +1178,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>>>>  {
>>>>  	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
>>>> -	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
>>>> +	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
>>>> +	    (end - start < PAGES_PER_SECTION * sizeof(struct page)))
>>>>  		return vmemmap_populate_basepages(start, end, node, altmap);
>>>>  	else
>>>>  		return vmemmap_populate_hugepages(start, end, node, altmap);
>>>
>>> Yes, this does mimic what x86 does. That handling does look weird,
>>> because it doesn't care about any address alignments, only about the
>>> size, which is odd.
>>>
>>> I wonder if we could do better and move this handling
>>> into vmemmap_populate_hugepages(), where we already have a fallback
>>> to vmemmap_populate_basepages().
>>
>> Hi David,
>>
>> I had the same doubt initially.
>> After going through the code, I noticed that for vmemmap_populate(),
>> the "start" and "end" arguments passed down should already lie within
>> one section.
>> early path:
>> 	for_each_present_section_nr
>> 		__populate_section_memmap
>> 			..
>> 			vmemmap_populate()
>>
>> hotplug path:
>> 	__add_pages
>> 		section_activate
>> 			vmemmap_populate()
>>
>> Therefore, focusing only on the size seems OK to me, and the fallback
>> solution below appears unnecessary?
>
> Ah, in that case it is fine. Might make sense to document/enforce that
> somehow for the time being ...
Shall I document this and WARN_ON if the size exceeds one section? Like:
WARN_ON(end - start > PAGES_PER_SECTION * sizeof(struct page))
Since vmemmap_populate() is implemented per architecture, the change
should apply to the other architectures as well. However, I have no
setup to test it on...
Therefore, may I implement it only for arm64 for now?
Additionally, from the previous discussion, the change is worth
backporting (apologies for forgetting to CC the stable kernel in this
version). Keeping it arm64-only should also simplify backporting. WDYT?
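
For concreteness, here is a rough sketch of what I have in mind for the
arm64 side (untested; the WARN_ON placement and the comment wording are
only my assumption of how we would document the constraint):

int __meminit vmemmap_populate(unsigned long start, unsigned long end,
			       int node, struct vmem_altmap *altmap)
{
	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
	/*
	 * Both the early path and the hotplug path populate the memmap
	 * of at most one section per call, so a larger range indicates
	 * a caller bug.
	 */
	WARN_ON(end - start > PAGES_PER_SECTION * sizeof(struct page));

	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
	    (end - start < PAGES_PER_SECTION * sizeof(struct page)))
		return vmemmap_populate_basepages(start, end, node, altmap);
	else
		return vmemmap_populate_hugepages(start, end, node, altmap);
}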
>
>
>>> +/*
>>> + * Try to populate PMDs, but fallback to populating base pages when
>>> + * ranges would only partially cover a PMD.
>>> + */
>>>  int __meminit vmemmap_populate_hugepages(unsigned long start,
>>>  					 unsigned long end, int node,
>>>  					 struct vmem_altmap *altmap)
>>>  {
>>> @@ -313,6 +317,9 @@ int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
>>>  	for (addr = start; addr < end; addr = next) {
>>
>> This for loop appears to be redundant for arm64 as well: as mentioned
>> above, the range covers at most one PMD worth of vmemmap, so a single
>> call to pmd_addr_end() should suffice.
>
> Right, that was what was confusing me in the first place.
>
>>
>>>  		next = pmd_addr_end(addr, end);
>>>
>>> +		if (!IS_ALIGNED(addr, PMD_SIZE) || !IS_ALIGNED(next, PMD_SIZE))
>>> +			goto fallback;
>>> +
>>>  		pgd = vmemmap_pgd_populate(addr, node);
>>>  		if (!pgd)
>>>  			return -ENOMEM;
>>> @@ -346,6 +353,7 @@ int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
>>>  			}
>>>  		} else if (vmemmap_check_pmd(pmd, node, addr, next))
>>>  			continue;
>>> +fallback:
>>>  		if (vmemmap_populate_basepages(addr, next, node, altmap))
>>>  			return -ENOMEM;
>>
>> It seems we would never get the chance to call
>> vmemmap_populate_basepages() here?
>
>
> Can you elaborate?
It's invoked within vmemmap_populate_hugepages(), which is called from
vmemmap_populate(). This implies that we are always performing a whole
section hotplug by the time we get there, so the fallback would never
be reached?
However, since this is common code shared with other architectures such
as x86, RISC-V and LoongArch, their code paths still need to be reviewed
as well. At the very least, it's not a BUG :)
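
For reference, my understanding of why "start" and "end" are bounded by
one section: the generic caller derives them from a single section's pfn
range. Roughly, paraphrasing mm/sparse-vmemmap.c's
__populate_section_memmap() from memory (trimmed, not a verbatim quote):

struct page * __meminit __populate_section_memmap(unsigned long pfn,
		unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
		struct dev_pagemap *pgmap)
{
	unsigned long start = (unsigned long) pfn_to_page(pfn);
	unsigned long end = start + nr_pages * sizeof(struct page);

	/*
	 * section_activate() never passes more than one section's worth
	 * of pages, so end - start is at most
	 * PAGES_PER_SECTION * sizeof(struct page).
	 */
	if (vmemmap_populate(start, end, nid, altmap))
		return NULL;

	return pfn_to_page(pfn);
}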
>
>