Message-ID: <76becfc1-e609-e3e8-2966-4053143170b6@google.com>
Date: Sun, 24 Dec 2023 21:21:03 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Gang Li <ligang.bdlg@...edance.com>
cc: Mike Kravetz <mike.kravetz@...cle.com>, Gang Li <gang.li@...ux.dev>,
David Hildenbrand <david@...hat.com>, Muchun Song <muchun.song@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2 0/5] hugetlb: parallelize hugetlb page init on
boot
On Thu, 21 Dec 2023, David Rientjes wrote:
> > Hi,
> >
> > On 2023/12/13 08:10, David Rientjes wrote:
> > > On 6.6 I measured "hugepagesz=1G hugepages=11776" on a 12TB host to be
> > > 77s this time around.
> >
> > Thanks for your test! Is this the total kernel boot time, or just the
> > hugetlb initialization time?
> >
>
> Ah, sorry for not being specific. It's just the hugetlb preallocation of
> 11776 1GB hugetlb pages; total boot takes a few more minutes.
>
I had to apply this to get the patch series to compile on 6.7-rc7:
diff --git a/kernel/padata.c b/kernel/padata.c
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -485,7 +485,7 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
 	struct padata_work my_work, *pw;
 	struct padata_mt_job_state ps;
 	LIST_HEAD(works);
-	int nworks, nid;
+	int nworks, nid = 0;
 
 	if (job->size == 0)
 		return;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3300,7 +3300,7 @@ int alloc_bootmem_huge_page(struct hstate *h, int nid)
 int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 {
 	struct huge_bootmem_page *m = NULL; /* initialize for clang */
-	int nr_nodes, node;
+	int nr_nodes, node = NUMA_NO_NODE;
 
 	/* do node specific alloc */
 	if (nid != NUMA_NO_NODE) {
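For what it's worth, both hunks look like the same class of warning: the
variable is only ever assigned inside a node-iteration macro, so the
compiler cannot prove it is initialized on every path to its use. Here is
a minimal standalone sketch of that pattern; the macro and function names
are made up for illustration and are not taken from the series:

/*
 * sketch.c - minimal reproduction of a -Wmaybe-uninitialized warning.
 * Build with: gcc -O2 -Wall sketch.c
 * for_each_fake_node() only mimics the shape of the kernel's
 * for_each_*() iteration macros; it is hypothetical.
 */
#include <stdio.h>

#define for_each_fake_node(nid) \
	for ((nid) = 0; (nid) < 4; (nid)++)

static int pick_node(int skip)
{
	int nid;	/* warns unless given a default, e.g. int nid = 0; */

	if (skip)
		goto out;	/* path on which nid is never assigned */

	for_each_fake_node(nid)
		if (nid == 2)
			break;
out:
	return nid;
}

int main(void)
{
	printf("picked node %d\n", pick_node(0));
	return 0;
}

Initializing the variable at declaration, as in the hunks above, is the
straightforward way to silence it.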
With that, I compared "hugepagesz=1G hugepages=11776" before and after on
a 12TB host with eight NUMA nodes.
Compared to 77s of total initialization time before, with this series I
measured 18.3s.
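(For scale: 11776 pages at 1GB each is about 11.5TB of the 12TB host
preallocated, and 77s / 18.3s works out to roughly a 4.2x speedup.)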
Feel free to add this to the changelog once the initialization issues are
fixed up; I'd be happy to ack it.
Thanks!