Message-ID: <2e942706-5772-0a93-bab3-902644c578e7@oracle.com>
Date:   Wed, 6 Sep 2023 10:26:09 +0100
From:   Joao Martins <joao.m.martins@...cle.com>
To:     Muchun Song <songmuchun@...edance.com>,
        Mike Kravetz <mike.kravetz@...cle.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Oscar Salvador <osalvador@...e.de>,
        David Hildenbrand <david@...hat.com>,
        Miaohe Lin <linmiaohe@...wei.com>,
        David Rientjes <rientjes@...gle.com>,
        Anshuman Khandual <anshuman.khandual@....com>,
        Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
        Michal Hocko <mhocko@...e.com>,
        Matthew Wilcox <willy@...radead.org>,
        Xiongchun Duan <duanxiongchun@...edance.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        muchun.song@...ux.dev
Subject: Re: [External] Re: [PATCH v2 09/11] hugetlb: batch PMD split for bulk
 vmemmap dedup



On 06/09/2023 10:11, Muchun Song wrote:
> On Wed, Sep 6, 2023 at 4:25 PM Muchun Song <muchun.song@...ux.dev> wrote:
>>
>>
>>
>> On 2023/9/6 05:44, Mike Kravetz wrote:
>>> From: Joao Martins <joao.m.martins@...cle.com>
>>>
>>> In an effort to minimize the number of TLB flushes, batch all PMD splits
>>> belonging to a range of pages in order to perform only 1 (global) TLB
>>> flush.
>>>
>>> Rebased and updated by Mike Kravetz
>>>
>>> Signed-off-by: Joao Martins <joao.m.martins@...cle.com>
>>> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
>>> ---
>>>   mm/hugetlb_vmemmap.c | 72 +++++++++++++++++++++++++++++++++++++++++---
>>>   1 file changed, 68 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
>>> index a715712df831..d956551699bc 100644
>>> --- a/mm/hugetlb_vmemmap.c
>>> +++ b/mm/hugetlb_vmemmap.c
>>> @@ -37,7 +37,7 @@ struct vmemmap_remap_walk {
>>>       struct list_head        *vmemmap_pages;
>>>   };
>>>
>>> -static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
>>> +static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start, bool flush)
>>>   {
>>>       pmd_t __pmd;
>>>       int i;
>>> @@ -80,7 +80,8 @@ static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
>>>               /* Make pte visible before pmd. See comment in pmd_install(). */
>>>               smp_wmb();
>>>               pmd_populate_kernel(&init_mm, pmd, pgtable);
>>> -             flush_tlb_kernel_range(start, start + PMD_SIZE);
>>> +             if (flush)
>>> +                     flush_tlb_kernel_range(start, start + PMD_SIZE);
>>>       } else {
>>>               pte_free_kernel(&init_mm, pgtable);
>>>       }
>>> @@ -127,11 +128,20 @@ static int vmemmap_pmd_range(pud_t *pud, unsigned long addr,
>>>       do {
>>>               int ret;
>>>
>>> -             ret = split_vmemmap_huge_pmd(pmd, addr & PMD_MASK);
>>> +             ret = split_vmemmap_huge_pmd(pmd, addr & PMD_MASK,
>>> +                             walk->remap_pte != NULL);
>>
>> It is better to only make @walk->remap_pte indicate whether we should go
>> to the last page table level. I suggest reusing VMEMMAP_NO_TLB_FLUSH
>> to indicate whether we should flush the TLB at the pmd level. It'll be
>> clearer.
>>
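
For reference, a rough sketch of the flags-based shape being suggested (the
flags member and the VMEMMAP_NO_TLB_FLUSH definition are assumed here, as they
would come from a later patch in the series rather than from this one):

	/*
	 * Assumed flag name, following the suggestion above: ask for one global
	 * TLB flush from the caller instead of a flush per split PMD.
	 */
	#define VMEMMAP_NO_TLB_FLUSH	BIT(0)

	struct vmemmap_remap_walk {
		/* existing members (remap_pte, vmemmap_pages, ...) unchanged */
		unsigned long		flags;	/* assumed new member */
	};

	/*
	 * In vmemmap_pmd_range(), derive the per-PMD flush from the flag, so
	 * that remap_pte == NULL only means "split, do not descend to the
	 * PTE level":
	 */
	ret = split_vmemmap_huge_pmd(pmd, addr & PMD_MASK,
				     !(walk->flags & VMEMMAP_NO_TLB_FLUSH));
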
>>>               if (ret)
>>>                       return ret;
>>>
>>>               next = pmd_addr_end(addr, end);
>>> +
>>> +             /*
>>> +              * We are only splitting, not remapping the hugetlb vmemmap
>>> +              * pages.
>>> +              */
>>> +             if (!walk->remap_pte)
>>> +                     continue;
>>> +
>>>               vmemmap_pte_range(pmd, addr, next, walk);
>>>       } while (pmd++, addr = next, addr != end);
>>>
>>> @@ -198,7 +208,8 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
>>>                       return ret;
>>>       } while (pgd++, addr = next, addr != end);
>>>
>>> -     flush_tlb_kernel_range(start, end);
>>> +     if (walk->remap_pte)
>>> +             flush_tlb_kernel_range(start, end);
>>>
>>>       return 0;
>>>   }
>>> @@ -297,6 +308,35 @@ static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
>>>       set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
>>>   }
>>>
>>> +/**
>>> + * vmemmap_remap_split - split the vmemmap virtual address range [@start, @end)
>>> + *                      backing PMDs of the directmap into PTEs
>>> + * @start:     start address of the vmemmap virtual address range that we want
>>> + *             to remap.
>>> + * @end:       end address of the vmemmap virtual address range that we want to
>>> + *             remap.
>>> + * @reuse:     reuse address.
>>> + *
>>> + * Return: %0 on success, negative error code otherwise.
>>> + */
>>> +static int vmemmap_remap_split(unsigned long start, unsigned long end,
>>> +                             unsigned long reuse)
>>> +{
>>> +     int ret;
>>> +     struct vmemmap_remap_walk walk = {
>>> +             .remap_pte      = NULL,
>>> +     };
>>> +
>>> +     /* See the comment in the vmemmap_remap_free(). */
>>> +     BUG_ON(start - reuse != PAGE_SIZE);
>>> +
>>> +     mmap_read_lock(&init_mm);
>>> +     ret = vmemmap_remap_range(reuse, end, &walk);
>>> +     mmap_read_unlock(&init_mm);
>>> +
>>> +     return ret;
>>> +}
>>> +
>>>   /**
>>>    * vmemmap_remap_free - remap the vmemmap virtual address range [@start, @end)
>>>    *                  to the page which @reuse is mapped to, then free vmemmap
>>> @@ -602,11 +642,35 @@ void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
>>>       free_vmemmap_page_list(&vmemmap_pages);
>>>   }
>>>
>>> +static void hugetlb_vmemmap_split(const struct hstate *h, struct page *head)
>>> +{
>>> +     unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
>>> +     unsigned long vmemmap_reuse;
>>> +
>>> +     if (!vmemmap_should_optimize(h, head))
>>> +             return;
>>> +
>>> +     vmemmap_end     = vmemmap_start + hugetlb_vmemmap_size(h);
>>> +     vmemmap_reuse   = vmemmap_start;
>>> +     vmemmap_start   += HUGETLB_VMEMMAP_RESERVE_SIZE;
>>> +
>>> +     /*
>>> +      * Split PMDs on the vmemmap virtual address range [@vmemmap_start,
>>> +      * @vmemmap_end]
>>> +      */
>>> +     vmemmap_remap_split(vmemmap_start, vmemmap_end, vmemmap_reuse);
>>> +}
>>> +
>>>   void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
>>>   {
>>>       struct folio *folio;
>>>       LIST_HEAD(vmemmap_pages);
>>>
>>> +     list_for_each_entry(folio, folio_list, lru)
>>> +             hugetlb_vmemmap_split(h, &folio->page);
>>
>> Maybe it is reasonable to add a return value to hugetlb_vmemmap_split()
>> to indicate whether it has completed successfully. If it fails, it must be
>> OOM, in which case there is no sense in continuing to split the page tables
>> and optimize the vmemmap pages subsequently, right?
> 
> Sorry, it is reasonable to continue optimizing the vmemmap pages
> afterwards, since that should succeed for the folios whose vmemmap
> pages have already been split successfully.
> 
> It also seems we should continue optimizing vmemmap once
> hugetlb_vmemmap_split() fails, since the memory we get back from the
> optimization lets us continue splitting.

Good point

> But it will
> make hugetlb_vmemmap_optimize_folios() a little complex. I'd like to
> hear you guys' opinions here.
> 
I think it won't add that much complexity as long as we don't try to optimize
the slow path (when we are out of memory) too much. In the batch freeing patch
we could additionally test the return value of __hugetlb_vmemmap_optimize() for
ENOMEM, free the currently stored vmemmap_pages (if any), and keep iterating the
optimize loop. That should be simple enough and would make this a bit more
resilient to that scenario. But we would need to keep the earlier check you
commented on above (where we use @remap_pte to defer the PMD flush).
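
Roughly, something like the following, assuming __hugetlb_vmemmap_optimize()
is changed to return an error code (it does not in the series as posted, so
the signature change and the exact recovery steps are only a sketch, not the
final code):

	void hugetlb_vmemmap_optimize_folios(struct hstate *h,
					     struct list_head *folio_list)
	{
		struct folio *folio;
		LIST_HEAD(vmemmap_pages);

		/* Pass 1: split the PMDs, then do one global TLB flush. */
		list_for_each_entry(folio, folio_list, lru)
			hugetlb_vmemmap_split(h, &folio->page);

		flush_tlb_all();

		/* Pass 2: remap, batching the to-be-freed vmemmap pages. */
		list_for_each_entry(folio, folio_list, lru) {
			int ret = __hugetlb_vmemmap_optimize(h, &folio->page,
							     &vmemmap_pages);

			/*
			 * Allocating page tables for the remap failed: free
			 * what has been batched so far to get memory back,
			 * optionally retry this folio, and keep iterating
			 * instead of giving up on the rest of the list.
			 */
			if (ret == -ENOMEM && !list_empty(&vmemmap_pages)) {
				free_vmemmap_page_list(&vmemmap_pages);
				INIT_LIST_HEAD(&vmemmap_pages);
				__hugetlb_vmemmap_optimize(h, &folio->page,
							   &vmemmap_pages);
			}
		}

		/*
		 * ... followed by freeing whatever is left in vmemmap_pages,
		 * as the batch-freeing patch already does.
		 */
	}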

> Thanks.
> 
>>
>> Thanks.
>>
>>> +
>>> +     flush_tlb_all();
>>> +
>>>       list_for_each_entry(folio, folio_list, lru)
>>>               __hugetlb_vmemmap_optimize(h, &folio->page, &vmemmap_pages);
>>>
>>
