lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <79d23b92-998c-4148-a93c-75979d047d4e@quicinc.com>
Date: Thu, 2 Jan 2025 17:13:53 +0800
From: Zhenhua Huang <quic_zhenhuah@...cinc.com>
To: Anshuman Khandual <anshuman.khandual@....com>,
        Catalin Marinas
	<catalin.marinas@....com>
CC: <will@...nel.org>, <ardb@...nel.org>, <ryan.roberts@....com>,
        <mark.rutland@....com>, <joey.gouly@....com>,
        <dave.hansen@...ux.intel.com>, <akpm@...ux-foundation.org>,
        <chenfeiyang@...ngson.cn>, <chenhuacai@...nel.org>,
        <linux-mm@...ck.org>, <linux-arm-kernel@...ts.infradead.org>,
        <linux-kernel@...r.kernel.org>,
        Tingwei Zhang <quic_tingweiz@...cinc.com>
Subject: Re: [PATCH v2 1/2] arm64: mm: vmemmap populate to page level if not
 section aligned



On 2025/1/2 11:51, Anshuman Khandual wrote:
> On 12/30/24 13:18, Zhenhua Huang wrote:
>> Hi Anshuman,
>>
>> On 2024/12/27 15:49, Anshuman Khandual wrote:
>>> On 12/24/24 19:39, Catalin Marinas wrote:
>>>> On Tue, Dec 24, 2024 at 05:32:06PM +0800, Zhenhua Huang wrote:
>>>>> Thanks Catalin for review!
>>>>> Merry Christmas.
>>>>
>>>> Merry Christmas to you too!
>>>>
>>>>> On 2024/12/21 2:30, Catalin Marinas wrote:
>>>>>> On Mon, Dec 09, 2024 at 05:42:26PM +0800, Zhenhua Huang wrote:
>>>>>>> Fixes: c1cc1552616d ("arm64: MMU initialisation")
>>>>>>
>>>>>> I wouldn't add a fix for the first commit adding arm64 support, we did
>>>>>> not even have memory hotplug at the time (added later in 5.7 by commit
>>>>>> bbd6ec605c0f ("arm64/mm: Enable memory hot remove")). IIUC, this hasn't
>>>>>> been a problem until commit ba72b4c8cf60 ("mm/sparsemem: support
>>>>>> sub-section hotplug"). That commit broke some arm64 assumptions.
>>>>>
>>>>> Shall we add ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
>>>>> because it broke arm64 assumptions ?
>>>>
>>>> Yes, I think that would be better. And a cc stable to 5.4 (the above
>>>> commit appeared in 5.3).
>>>
>>> Agreed. This is a problem which needs fixing but not sure if proposed patch
>>> here fixes that problem.
>>>
>>>>
>>>>>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>>>>>> index e2739b69e11b..fd59ee44960e 100644
>>>>>>> --- a/arch/arm64/mm/mmu.c
>>>>>>> +++ b/arch/arm64/mm/mmu.c
>>>>>>> @@ -1177,7 +1177,9 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>>>>>>>     {
>>>>>>>         WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
>>>>>>> -    if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
>>>>>>> +    if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
>>>>>>> +    !IS_ALIGNED(page_to_pfn((struct page *)start), PAGES_PER_SECTION) ||
>>>>>>> +    !IS_ALIGNED(page_to_pfn((struct page *)end), PAGES_PER_SECTION))
>>>>>>>             return vmemmap_populate_basepages(start, end, node, altmap);
>>>>>>>         else
>>>>>>>             return vmemmap_populate_hugepages(start, end, node, altmap);
>>>>>>
>>>>>> An alternative would be to fix unmap_hotplug_pmd_range() etc. to avoid
>>>>>> nuking the whole vmemmap pmd section if it's not empty. Not sure how
>>>>>> easy that is, whether we have the necessary information (I haven't
>>>>>> looked in detail).
>>>>>>
>>>>>> A potential issue - can we hotplug 128MB of RAM and only unplug 2MB? If
>>>>>> that's possible, the problem isn't solved by this patch.
>>>
>>> Believe this is possible after sub-section hotplug and hotremove support.
>>>
>>>>>
>>>>> Indeed, seems there is no guarantee that plug size must be equal to unplug
>>>>> size...
>>>>>
>>>>> I have two ideas:
>>>>> 1. Completely disable this PMD mapping optimization since there is no
>>>>> guarantee we must align 128M memory for hotplug ..
>>>>
>>>> I'd be in favour of this, at least if CONFIG_MEMORY_HOTPLUG is enabled.
>>>> I think the only advantage here is that we don't allocate a full 2MB
>>>> block for vmemmap when only plugging in a sub-section.
>>>
>>> Agreed, that will be the right fix for the problem which can be back ported.
>>> We will have to prevent PMD/PUD/CONT mappings for both linear and as well as
>>
>> Thanks Anshuman, yeah.. we must handle linear mapping as well.
>>
>>> vmemmap for all non-boot memory sections, that can be hot-unplugged.
>>>
>>> Something like the following ? [untested]
>>>
>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>> index 216519663961..56b9c6891f46 100644
>>> --- a/arch/arm64/mm/mmu.c
>>> +++ b/arch/arm64/mm/mmu.c
>>> @@ -1171,9 +1171,15 @@ int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
>>>    int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>>>                   struct vmem_altmap *altmap)
>>>    {
>>> +       unsigned long start_pfn;
>>> +       struct mem_section *ms;
>>> +
>>>           WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
>>>    -       if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
>>> +       start_pfn = page_to_pfn((struct page *)start);
>>> +       ms = __pfn_to_section(start_pfn);
>>> +
>>> +       if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) || !early_section(ms))
>>
>> LGTM. I will follow your and Catalin's suggestion to prepare further patches, Thanks!
>>
>>>                   return vmemmap_populate_basepages(start, end, node, altmap);
>>>           else
>>>                   return vmemmap_populate_hugepages(start, end, node, altmap);
>>> @@ -1334,10 +1340,15 @@ struct range arch_get_mappable_range(void)
>>>    int arch_add_memory(int nid, u64 start, u64 size,
>>>                       struct mhp_params *params)
>>>    {
>>> +       unsigned long start_pfn = page_to_pfn((struct page *)start);
>>> +       struct mem_section *ms = __pfn_to_section(start_pfn);
>>>           int ret, flags = NO_EXEC_MAPPINGS;
>>>             VM_BUG_ON(!mhp_range_allowed(start, size, true));
>>>    +       if (!early_section(ms))
>>> +               flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>
>> However, here comes another doubt, given that the subsection size is 2M, shouldn't we have ability to support PMD SECTION MAPPING if CONFIG_ARM64_4K_PAGES? This might be the optimization we want to maintain?>
>> Should we remove NO_BLOCK_MAPPINGS and add more constraints to avoid pud_set_huge if CONFIG_ARM64_4K_PAGES ?
> 
> I guess this has been covered on the thread.

Yeah, the example I shared on separated thread, we can wrap up the 
discussion here :)

> 
>>
>>> +
>>>           if (can_set_direct_map())
>>>                   flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>>   
>>>>
>>>>> 2. If we want to take this optimization.
>>>>> I propose adding one argument to vmemmap_free to indicate if the entire
>>>>> section is freed(based on subsection map). Vmemmap_free is a common function
>>>>> and might affect other architectures... The process would be:
>>>>> vmemmap_free
>>>>>      unmap_hotplug_range //In unmap_hotplug_pmd_range() as you mentioned:if
>>>>> whole section is freed, proceed as usual. Otherwise, *just clear out struct
>>>>> page content but do not free*.
>>>>>      free_empty_tables // will be called only if entire section is freed
>>>>>
>>>>> On the populate side,
>>>>> else if (vmemmap_check_pmd(pmd, node, addr, next)) //implement this function
>>>>>      continue;    //Buffer still exists, just abort..
>>>>>
>>>>> Could you please comment further whether #2 is feasible ?
>>>>
>>>> vmemmap_free() already gets start/end, so it could at least check the
>>>> alignment and avoid freeing if it's not unplugging a full section. It
>>>
>>> unmap_hotplug_pmd_range()
>>> {
>>>      do {
>>>          if (pmd_sect(pmd)) {
>>>              pmd_clear(pmdp);
>>>              flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
>>>                           if (free_mapped)
>>>                                   free_hotplug_page_range(pmd_page(pmd),
>>>                                                           PMD_SIZE, altmap);
>>>          }
>>>      } while ()
>>> }
>>>
>>> Do you mean clearing the PMD entry but not freeing the mapped page for vmemmap ?
>>> In that case should the hot-unplug fail or not ? If we free the pfns (successful
>>> hot-unplug), then leaving behind entire PMD entry for covering the remaining sub
>>> sections, is going to be problematic as it still maps the removed pfns as well !
>>
>> Could you please help me to understand in which scenarios this might cause issue? I assume we won't touch these struct page further?
> 
> Regardless of whether the non-present pfns are accessed or not from the cpu, having
> page table mappings covering them might probably create corresponding TLB entries ?
> IIUC from an arch perspective, this seems undesirable and possibly some what risky.
> 

Got it.. I know it's tricky indeed.

>>
>>>
>>>> does leave a 2MB vmemmap block in place when freeing the last subsection
>>>> but it's safer than freeing valid struct page entries. In addition, it
>>>> could query the memory hotplug state with something like
>>>> find_memory_block() and figure out whether the section is empty.
>>>
>>> I guess there are two potential solutions, if unmap_hotplug_pmd_range() were to
>>> handle sub-section removal.
>>>
>>> 1) Skip pmd_clear() when entire section is not covered
>>>
>>> a. pmd_clear() only if all but the current subsection have been removed earlier
>>>      via is_subsection_map_empty() or something similar.
>>>
>>> b. Skip pmd_clear() if the entire section covering that PMD is not being removed
>>>      but that might be problematic, as it still maps potentially unavailable pfns,
>>>      which are now hot-unplugged out.
>>>
>>> 2) Break PMD into base pages
>>>
>>> a. pmd_clear() only if all but the current subsection have been removed earlier
>>>      via is_subsection_map_empty() or something similar.
>>>
>>> b. Break entire PMD into base page mappings and remove entries corresponding to
>>>      the subsection being removed. Although the BBM sequence needs to be followed
>>>      while making sure that no other part of the kernel is accessing subsections,
>>>      that are mapped via the erstwhile PMD but currently not being removed.
>>>
>>>>
>>>> Anyway, I'll be off until the new year, maybe I get other ideas by then.
>>>>
>>


Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ