Message-Id: <57BC1D0C-23B2-4363-8B14-9602B69D53D5@linux.dev>
Date:   Wed, 20 Sep 2023 11:05:30 +0800
From:   Muchun Song <muchun.song@...ux.dev>
To:     Mike Kravetz <mike.kravetz@...cle.com>
Cc:     Linux-MM <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>,
        Muchun Song <songmuchun@...edance.com>,
        Joao Martins <joao.m.martins@...cle.com>,
        Oscar Salvador <osalvador@...e.de>,
        David Hildenbrand <david@...hat.com>,
        Miaohe Lin <linmiaohe@...wei.com>,
        David Rientjes <rientjes@...gle.com>,
        Anshuman Khandual <anshuman.khandual@....com>,
        Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
        Barry Song <21cnbao@...il.com>, Michal Hocko <mhocko@...e.com>,
        Matthew Wilcox <willy@...radead.org>,
        Xiongchun Duan <duanxiongchun@...edance.com>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v4 3/8] hugetlb: perform vmemmap optimization on a list of
 pages



> On Sep 20, 2023, at 04:49, Mike Kravetz <mike.kravetz@...cle.com> wrote:
> 
> On 09/19/23 11:10, Muchun Song wrote:
>> 
>> 
>> On 2023/9/19 07:01, Mike Kravetz wrote:
>>> When adding hugetlb pages to the pool, we first create a list of the
>>> allocated pages before adding to the pool.  Pass this list of pages to a
>>> new routine hugetlb_vmemmap_optimize_folios() for vmemmap optimization.
>>> 
>>> Due to significant differences in vmemmap initialization for bootmem
>>> allocated hugetlb pages, a new routine prep_and_add_bootmem_folios
>>> is created.
>>> 
>>> We also modify the routine vmemmap_should_optimize() to check for pages
>>> that are already optimized.  There are code paths that might request
>>> vmemmap optimization twice and we want to make sure this is not
>>> attempted.
>>> 
>>> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
>>> ---
>>>  mm/hugetlb.c         | 50 +++++++++++++++++++++++++++++++++++++-------
>>>  mm/hugetlb_vmemmap.c | 11 ++++++++++
>>>  mm/hugetlb_vmemmap.h |  5 +++++
>>>  3 files changed, 58 insertions(+), 8 deletions(-)
>>> 
>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>> index 8624286be273..d6f3db3c1313 100644
>>> --- a/mm/hugetlb.c
>>> +++ b/mm/hugetlb.c
>>> @@ -2269,6 +2269,11 @@ static void prep_and_add_allocated_folios(struct hstate *h,
>>>  {
>>>   struct folio *folio, *tmp_f;
>>> +	/*
>>> +	 * Send list for bulk vmemmap optimization processing
>>> +	 */
>> 
>> From the kernel coding-style document, the one-line comment format is "/* ... */".
>> 
> 
> Will change the comments introduced here.

BTW, there are some other places like this as well, please update them all, thanks.
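
For illustration (reusing the comment text from this patch), the single-line
form the coding-style document asks for is:

	/* Send list for bulk vmemmap optimization processing */

rather than spreading the same short comment over three lines.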

> 
>>> +	hugetlb_vmemmap_optimize_folios(h, folio_list);
>>> +
>>>  	/*
>>>  	 * Add all new pool pages to free lists in one lock cycle
>>>  	 */
>>> @@ -3309,6 +3314,40 @@ static void __init hugetlb_folio_init_vmemmap(struct folio *folio,
>>>   prep_compound_head((struct page *)folio, huge_page_order(h));
>>>  }
>>> +static void __init prep_and_add_bootmem_folios(struct hstate *h,
>>> +		struct list_head *folio_list)
>>> +{
>>> +	struct folio *folio, *tmp_f;
>>> +
>>> +	/*
>>> +	 * Send list for bulk vmemmap optimization processing
>>> +	 */
>>> +	hugetlb_vmemmap_optimize_folios(h, folio_list);
>>> +
>>> +	/*
>>> +	 * Add all new pool pages to free lists in one lock cycle
>>> +	 */
>>> +	spin_lock_irq(&hugetlb_lock);
>>> +	list_for_each_entry_safe(folio, tmp_f, folio_list, lru) {
>>> +		if (!folio_test_hugetlb_vmemmap_optimized(folio)) {
>>> +			/*
>>> +			 * If HVO fails, initialize all tail struct pages
>>> +			 * We do not worry about potential long lock hold
>>> +			 * time as this is early in boot and there should
>>> +			 * be no contention.
>>> +			 */
>>> +			hugetlb_folio_init_tail_vmemmap(folio,
>>> +					HUGETLB_VMEMMAP_RESERVE_PAGES,
>>> +					pages_per_huge_page(h));
>>> +		}
>>> +		__prep_account_new_huge_page(h, folio_nid(folio));
>>> +		enqueue_hugetlb_folio(h, folio);
>>> +	}
>>> +	spin_unlock_irq(&hugetlb_lock);
>>> +
>>> +	INIT_LIST_HEAD(folio_list);
>> 
>> I'm not sure what the purpose of this reinitialization of the list head is.
>> 
> 
> There really is no purpose.  This was copied from
> prep_and_add_allocated_folios which also has this unnecessary call.  It is
> unnecessary as enqueue_hugetlb_folio() will do a list_move for each
> folio on the list.  Therefore, at the end of the loop we KNOW the list
> is empty.

Right.
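
For reference, below is a minimal userspace sketch (it only mimics the
<linux/list.h> semantics, it is not the kernel implementation) showing why the
source list is guaranteed to be empty once every entry has been moved off it,
which is what makes the trailing INIT_LIST_HEAD() redundant:

/*
 * Userspace sketch only: a list drained by moving every entry onto another
 * list ends up empty, so reinitializing the source head afterwards is a no-op.
 */
#include <stdio.h>

struct list_head {
	struct list_head *prev, *next;
};

static void init_list_head(struct list_head *head)
{
	head->prev = head->next = head;
}

static void list_add(struct list_head *entry, struct list_head *head)
{
	entry->next = head->next;
	entry->prev = head;
	head->next->prev = entry;
	head->next = entry;
}

static void list_del(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
}

/* list_move(): unlink from the current list, then add to the new one. */
static void list_move(struct list_head *entry, struct list_head *head)
{
	list_del(entry);
	list_add(entry, head);
}

static int list_empty(const struct list_head *head)
{
	return head->next == head;
}

int main(void)
{
	struct list_head src, dst, nodes[3];
	struct list_head *pos, *tmp;
	int i;

	init_list_head(&src);
	init_list_head(&dst);
	for (i = 0; i < 3; i++)
		list_add(&nodes[i], &src);

	/* "Safe" iteration analogue of the enqueue loop in the patch. */
	pos = src.next;
	while (pos != &src) {
		tmp = pos->next;
		list_move(pos, &dst);	/* same effect as the list_move in enqueue_hugetlb_folio() */
		pos = tmp;
	}

	/* src is already empty; reinitializing it would change nothing. */
	printf("src empty after drain: %d\n", list_empty(&src));
	return 0;
}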

> 
> I will remove it here and in prep_and_add_allocated_folios.

Thanks.

> 
> Thanks,
> -- 
> Mike Kravetz

