Message-ID: <871c0dae-c419-4ac2-9472-6901aab90dcf@redhat.com>
Date: Mon, 17 Feb 2025 15:30:23 +0100
From: David Hildenbrand <david@...hat.com>
To: Zhenhua Huang <quic_zhenhuah@...cinc.com>, anshuman.khandual@....com,
 catalin.marinas@....com
Cc: will@...nel.org, ardb@...nel.org, ryan.roberts@....com,
 mark.rutland@....com, joey.gouly@....com, dave.hansen@...ux.intel.com,
 akpm@...ux-foundation.org, chenfeiyang@...ngson.cn, chenhuacai@...nel.org,
 linux-mm@...ck.org, linux-arm-kernel@...ts.infradead.org,
 linux-kernel@...r.kernel.org, quic_tingweiz@...cinc.com
Subject: Re: [PATCH v7] arm64: mm: Populate vmemmap at the page level if not
 section aligned

On 17.02.25 11:34, Zhenhua Huang wrote:
> 
> 
> On 2025/2/17 17:44, David Hildenbrand wrote:
>> On 17.02.25 10:29, Zhenhua Huang wrote:
>>> On the arm64 platform with the 4K base page config, SECTION_SIZE_BITS is
>>> set to 27, making one section 128M. The corresponding struct page region
>>> that vmemmap points to is then 2M.
>>> Commit c1cc1552616d ("arm64: MMU initialisation") optimized the
>>> vmemmap to populate at the PMD section level, which was suitable
>>> initially since the hotplug granularity was always one section (128M).
>>> However, commit ba72b4c8cf60 ("mm/sparsemem: support sub-section
>>> hotplug") introduced a 2M (SUBSECTION_SIZE) hotplug granularity, which
>>> broke the existing arm64 assumptions.
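
A quick back-of-the-envelope check of those numbers (assuming 4K base
pages and sizeof(struct page) == 64):

	section size        = 1 << 27          = 128 MiB
	pages per section   = 128 MiB / 4 KiB  = 32768
	vmemmap per section = 32768 * 64 bytes = 2 MiB (exactly one PMD)
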
>>>
>>> The first problem is that if start or end is not aligned to a section
>>> boundary, such as when a subsection is hot added, populating the entire
>>> section is wasteful.
>>>
>>> The next problem is that if we hotplug something that spans part of a
>>> 128 MiB section (subsections; let's call it memblock1), then hotplug
>>> something that spans another part of the same 128 MiB section
>>> (subsections; let's call it memblock2), and subsequently unplug
>>> memblock1, vmemmap_free() will clear the entire PMD entry, which also
>>> backs memblock2, even though memblock2 is still active.
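
To picture that second problem (illustrative layout, not to scale):

	128 MiB section:   [ memblock1 | memblock2 |  not plugged  ]
	vmemmap, one PMD:  [ struct pages for the entire section   ]

	unplug memblock1 -> vmemmap_free() clears the whole PMD
	                 -> memblock2's struct pages are gone too
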
>>>
>>> Assuming hotplug/unplug sizes are guaranteed to be symmetric, fix this
>>> the same way as x86-64: populate at the page level if start/end is not
>>> aligned to a section boundary.
>>>
>>> Signed-off-by: Zhenhua Huang <quic_zhenhuah@...cinc.com>
>>> ---
>>>    arch/arm64/mm/mmu.c | 3 ++-
>>>    1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>> index b4df5bc5b1b8..eec1666da368 100644
>>> --- a/arch/arm64/mm/mmu.c
>>> +++ b/arch/arm64/mm/mmu.c
>>> @@ -1178,7 +1178,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>>>    {
>>>        WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
>>> -    if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
>>> +    if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
>>> +        (end - start < PAGES_PER_SECTION * sizeof(struct page)))
>>>            return vmemmap_populate_basepages(start, end, node, altmap);
>>>        else
>>>            return vmemmap_populate_hugepages(start, end, node, altmap);
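
Plugging in the numbers from above: a 2 MiB subsection hot add needs
512 * 64 bytes = 32 KiB of vmemmap, so end - start is well below
PAGES_PER_SECTION * sizeof(struct page) (2 MiB) and base pages are
used; only a full 128 MiB section takes the hugepage path.
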
>>
>> Yes, this does mimic what x86 does. That handling does look odd,
>> because it doesn't care about any address alignments, only about the
>> size.
>>
>> I wonder if we could do better and move this handling
>> into vmemmap_populate_hugepages(), where we already have a fallback
>> to vmemmap_populate_basepages().
> 
> Hi David,
> 
> I had the same doubt initially.
> After going through the code, I noticed that for vmemmap_populate(), the
> arguments "start" and "end" passed down should already be within one
> section.
> early path:
> for_each_present_section_nr
> 	__populate_section_memmap
> 		..
> 		vmemmap_populate()
> 
> hotplug path:
> __add_pages
> 	section_activate
> 		vmemmap_populate()
> 
> Therefore, focusing only on the size seems OK to me, and the fallback
> solution below appears unnecessary?

Ah, in that case it is fine. Might make sense to document/enforce that 
somehow for the time being ...
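
As a minimal sketch of such an enforcement (hypothetical, not part of
the patch; assumes start/end are vmemmap addresses and that one
section's vmemmap spans PAGES_PER_SECTION * sizeof(struct page) bytes):

	/* Hypothetical: vmemmap_populate() expects a range within one section. */
	unsigned long section_vmemmap = PAGES_PER_SECTION * sizeof(struct page);

	WARN_ON(ALIGN_DOWN(start, section_vmemmap) !=
		ALIGN_DOWN(end - 1, section_vmemmap));
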


>> +/*
>> + * Try to populate PMDs, but fall back to populating base pages when ranges
>> + * would only partially cover a PMD.
>> + */
>>    int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
>>                                             int node, struct vmem_altmap *altmap)
>>    {
>> @@ -313,6 +317,9 @@ int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
>>           for (addr = start; addr < end; addr = next) {
> 
> This for loop appears to be redundant for arm64 as well; as mentioned
> above, a single call to pmd_addr_end() should suffice.

Right, that was what was confusing me in the first place.

> 
>>                   next = pmd_addr_end(addr, end);
>>
>> +               if (!IS_ALIGNED(addr, PMD_SIZE) || !IS_ALIGNED(next, PMD_SIZE))
>> +                       goto fallback;
>> +
>>                   pgd = vmemmap_pgd_populate(addr, node);
>>                   if (!pgd)
>>                           return -ENOMEM;
>> @@ -346,6 +353,7 @@ int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
>>                           }
>>                   } else if (vmemmap_check_pmd(pmd, node, addr, next))
>>                           continue;
>> +fallback:
>>                   if (vmemmap_populate_basepages(addr, next, node, altmap))
>>                           return -ENOMEM;
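
To spell out how the alignment check behaves with the 4K-page numbers
from earlier: a 2 MiB subsection hot add covers only a 32 KiB vmemmap
range, so addr and next cannot both be PMD_SIZE-aligned and we take
the fallback; a full 128 MiB section covers exactly one PMD-aligned
2 MiB vmemmap range and keeps the hugepage path.
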
> 
> It seems we never get a chance to call populate_basepages here?


Can you elaborate?


-- 
Cheers,

David / dhildenb

