Message-ID: <d0004780-0716-cc21-de66-e1ec6d257735@bytedance.com>
Date:   Sun, 30 Jul 2023 23:28:50 +0100
From:   Usama Arif <usama.arif@...edance.com>
To:     linux-mm@...ck.org, muchun.song@...ux.dev, mike.kravetz@...cle.com,
        rppt@...nel.org
Cc:     linux-kernel@...r.kernel.org, fam.zheng@...edance.com,
        liangma@...ngbit.com, simon.evans@...edance.com,
        punit.agrawal@...edance.com
Subject: Re: [v2 0/6] mm/memblock: Skip prep and initialization of struct
 pages freed later by HVO



On 30/07/2023 16:16, Usama Arif wrote:
> If the region is for gigantic hugepages and if HVO is enabled, then those
> struct pages which will be freed later by HVO don't need to be prepared and
> initialized. This can save significant time when a large number of hugepages
> are allocated at boot time.
> 
> For a 1G hugepage, this series avoid initialization and preparation of
> 262144 - 64 = 262080 struct pages per hugepage.
> 
> When tested on a 512G system (which can allocate max 500 1G hugepages), the
> kexec-boot time with HVO and DEFERRED_STRUCT_PAGE_INIT enabled without this
> patchseries to running init is 3.9 seconds. With this patch it is 1.2 seconds.
> This represents an approximately 70% reduction in boot time and will
> significantly reduce server downtime when using a large number of
> gigantic pages.
> 
> Thanks,
> Usama
> 

There were build errors reported by the kernel test robot when
CONFIG_HUGETLBFS/CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP is disabled, caused by
patches 5 and 6. The diff below should fix them. I will wait for review and
include it in the next revision, as it's a trivial change.

diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 3fff6f611c19..285b59b71203 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -38,6 +38,8 @@ static inline unsigned int hugetlb_vmemmap_optimizable_size(const struct hstate
                 return 0;
         return size > 0 ? size : 0;
  }
+
+extern bool vmemmap_optimize_enabled;
  #else
  static inline int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
  {
@@ -58,6 +60,8 @@ static inline bool vmemmap_should_optimize(const struct hstate *h, const struct
         return false;
  }

+static bool vmemmap_optimize_enabled = false;
+
  #endif /* CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP */

  static inline bool hugetlb_vmemmap_optimizable(const struct hstate *h)
@@ -65,6 +69,4 @@ static inline bool hugetlb_vmemmap_optimizable(const struct hstate *h)
         return hugetlb_vmemmap_optimizable_size(h) != 0;
  }

-extern bool vmemmap_optimize_enabled;
-
  #endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/internal.h b/mm/internal.h
index 692bb1136a39..c3321afa36cb 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1106,7 +1106,7 @@ struct vma_prepare {
  #ifdef CONFIG_HUGETLBFS
  void __init hugetlb_hstate_alloc_gigantic_pages(void);
  #else
-static inline void __init hugetlb_hstate_alloc_gigantic_pages(void);
+static inline void __init hugetlb_hstate_alloc_gigantic_pages(void)
  {
  }
  #endif /* CONFIG_HUGETLBFS */


> [v1->v2]:
> - (Mike Rapoport) Code quality improvements (function names, arguments,
> comments).
> 
> [RFC->v1]:
> - (Mike Rapoport) Change from passing hugepage_size in
> memblock_alloc_try_nid_raw for skipping struct page initialization to
> using MEMBLOCK_RSRV_NOINIT flag
> 
> 
> 
> Usama Arif (6):
>    mm: hugetlb: Skip prep of tail pages when HVO is enabled
>    mm: hugetlb_vmemmap: Use nid of the head page to reallocate it
>    memblock: pass memblock_type to memblock_setclr_flag
>    memblock: introduce MEMBLOCK_RSRV_NOINIT flag
>    mm: move allocation of gigantic hstates to the start of mm_core_init
>    mm: hugetlb: Skip initialization of struct pages freed later by HVO
> 
>   include/linux/memblock.h |  9 +++++
>   mm/hugetlb.c             | 71 +++++++++++++++++++++++++---------------
>   mm/hugetlb_vmemmap.c     |  6 ++--
>   mm/hugetlb_vmemmap.h     | 18 +++++++---
>   mm/internal.h            |  9 +++++
>   mm/memblock.c            | 45 +++++++++++++++++--------
>   mm/mm_init.c             |  6 ++++
>   7 files changed, 118 insertions(+), 46 deletions(-)
> 
