Message-ID: <00c82c92-35a0-441c-b5b5-e4a6c8a4a9b7@redhat.com>
Date: Wed, 26 Feb 2025 18:13:49 +0100
From: David Hildenbrand <david@...hat.com>
To: Zhenhua Huang <quic_zhenhuah@...cinc.com>, anshuman.khandual@....com,
 catalin.marinas@....com
Cc: will@...nel.org, ardb@...nel.org, ryan.roberts@....com,
 mark.rutland@....com, joey.gouly@....com, dave.hansen@...ux.intel.com,
 akpm@...ux-foundation.org, chenfeiyang@...ngson.cn, chenhuacai@...nel.org,
 linux-mm@...ck.org, linux-arm-kernel@...ts.infradead.org,
 linux-kernel@...r.kernel.org, quic_tingweiz@...cinc.com
Subject: Re: [PATCH v7] arm64: mm: Populate vmemmap at the page level if not
 section aligned

Sorry, I somehow missed this mail.

>>> Hi David,
>>>
>>> I had the same doubt initially.
>>> After going through the code, I noticed that for vmemmap_populate(), the
>>> arguments "start" and "end" passed down should already be within one
>>> section.
>>> early path:
>>> for_each_present_section_nr
>>>      __populate_section_memmap
>>>          ..
>>>          vmemmap_populate()
>>>
>>> hotplug path:
>>> __add_pages
>>>      section_activate
>>>          vmemmap_populate()
>>>
>>> Therefore, focusing only on the size seems OK to me, and the fallback
>>> solution below appears unnecessary?
>>
>> Ah, in that case it is fine. Might make sense to document/enforce that
>> somehow for the time being ...
> 
> Shall I document this and add a WARN_ON() if the size is exceeded? Like:
> WARN_ON(end - start > PAGES_PER_SECTION * sizeof(struct page))

Probably WARN_ON_ONCE() along with a comment that we should never exceed 
a single memory section.
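
For illustration, an untested sketch of what I have in mind (the exact
placement in arm64's vmemmap_populate() in arch/arm64/mm/mmu.c is an
assumption on my part):

	int __meminit vmemmap_populate(unsigned long start, unsigned long end,
				       int node, struct vmem_altmap *altmap)
	{
		/*
		 * Both the early path and the hotplug path populate the
		 * vmemmap for at most one memory section at a time, so
		 * the range must never exceed a single section worth of
		 * struct pages.
		 */
		WARN_ON_ONCE(end - start > PAGES_PER_SECTION * sizeof(struct page));

		/* ... rest of the function unchanged ... */
	}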

> 
> Since vmemmap_populate() is implemented per architecture, the change
> should apply to other architectures as well. However, I have no setup to
> test it on...
> Therefore, may I implement it only for arm64 for now?

Would work for me; better than no warning.

> Additionally, from the previous discussion, the change is worth
> backporting (apologies for forgetting to CC the stable kernel in this
> version). Keeping it arm64-only should simplify backporting. WDYT?

Jup. Of course, we could add a generic warning in a separate patch.
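
A minimal sketch of the generic variant (assuming it would live in the
common __populate_section_memmap() in mm/sparse-vmemmap.c, before calling
into the per-architecture code; untested):

	struct page * __meminit __populate_section_memmap(unsigned long pfn,
			unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
			struct dev_pagemap *pgmap)
	{
		/*
		 * No caller may populate more than a single memory
		 * section at once.
		 */
		if (WARN_ON_ONCE(nr_pages > PAGES_PER_SECTION))
			return NULL;

		/* ... */
	}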

> 
>>
>>
>>>> +/*
>>>> + * Try to populate PMDs, but fallback to populating base pages when ranges
>>>> + * would only partially cover a PMD.
>>>> + */
>>>>  int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
>>>>                                           int node, struct vmem_altmap *altmap)
>>>>  {
>>>> @@ -313,6 +317,9 @@ int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
>>>>         for (addr = start; addr < end; addr = next) {
>>>
>>> This for loop appears to be redundant for arm64 as well; as mentioned
>>> above, a single call to pmd_addr_end() should suffice.
>>
>> Right, that was what was confusing me in the first place.
>>
>>>
>>>>                next = pmd_addr_end(addr, end);
>>>>
>>>> +               if (!IS_ALIGNED(addr, PMD_SIZE) || !IS_ALIGNED(next, PMD_SIZE))
>>>> +                       goto fallback;
>>>> +
>>>>                pgd = vmemmap_pgd_populate(addr, node);
>>>>                if (!pgd)
>>>>                        return -ENOMEM;
>>>> @@ -346,6 +353,7 @@ int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
>>>>                        }
>>>>                } else if (vmemmap_check_pmd(pmd, node, addr, next))
>>>>                        continue;
>>>> +fallback:
>>>>                if (vmemmap_populate_basepages(addr, next, node, altmap))
>>>>                        return -ENOMEM;
>>>
>>> It seems we never actually get a chance to call vmemmap_populate_basepages() here?
>>
>>
>> Can you elaborate?
> 
> It's invoked within vmemmap_populate_hugepages(), which is called by
> vmemmap_populate(). This implies that we are always performing a
> whole-section hotplug?

Ah, you meant only in the context of this change, yes. I was confused, 
because there are other reasons why we run into that fallback (failing 
to allocate a PMD).
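
(For reference, the relevant part of the PMD loop in
vmemmap_populate_hugepages() in mm/sparse-vmemmap.c, simplified from
memory -- the altmap special case is omitted:)

	pmd = pmd_offset(pud, addr);
	if (pmd_none(READ_ONCE(*pmd))) {
		void *p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);

		if (p) {
			vmemmap_set_pmd(pmd, p, node, addr, next);
			continue;
		}
		/* PMD allocation failed: fall through to base pages */
	} else if (vmemmap_check_pmd(pmd, node, addr, next))
		continue;
	if (vmemmap_populate_basepages(addr, next, node, altmap))
		return -ENOMEM;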

-- 
Cheers,

David / dhildenb

