Message-ID: <icrdkacpdksofftv5jwrwcgojsa7qnby4iuvxsdktuxazivhks@ajcy2shag4nz>
Date: Fri, 8 Mar 2024 12:11:41 -0500
From: Daniel Jordan <daniel.m.jordan@...cle.com>
To: Gang Li <gang.li@...ux.dev>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
        David Hildenbrand <david@...hat.com>,
        David Rientjes <rientjes@...gle.com>,
        Muchun Song <muchun.song@...ux.dev>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Steffen Klassert <steffen.klassert@...unet.com>,
        Jane Chu <jane.chu@...cle.com>,
        "Paul E . McKenney" <paulmck@...nel.org>,
        Randy Dunlap <rdunlap@...radead.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, ligang.bdlg@...edance.com
Subject: Re: [PATCH v6 7/8] hugetlb: parallelize 2M hugetlb allocation and
 initialization

Hi,

On Thu, Feb 22, 2024 at 10:04:20PM +0800, Gang Li wrote:
> By distributing both the allocation and the initialization tasks across
> multiple threads, the initialization of 2M hugetlb will be faster,
> thereby improving the boot speed.
> 
> Here are some test results:
>       test case        no patch(ms)   patched(ms)   saved
>  ------------------- -------------- ------------- --------
>   256c2T(4 node) 2M           3336          1051   68.52%
>   128c1T(2 node) 2M           1943           716   63.15%

Great improvement, and glad to see the multithreading is useful here.

>  static unsigned long __init hugetlb_pages_alloc_boot(struct hstate *h)
>  {
> -	unsigned long i;
> -	struct folio *folio;
> -	LIST_HEAD(folio_list);
> -	nodemask_t node_alloc_noretry;
> -
> -	/* Bit mask controlling how hard we retry per-node allocations.*/
> -	nodes_clear(node_alloc_noretry);
> +	struct padata_mt_job job = {
> +		.fn_arg		= h,
> +		.align		= 1,
> +		.numa_aware	= true
> +	};
>  
> -	for (i = 0; i < h->max_huge_pages; ++i) {
> -		folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
> -						&node_alloc_noretry);
> -		if (!folio)
> -			break;
> -		list_add(&folio->lru, &folio_list);
> -		cond_resched();
> -	}
> +	job.thread_fn	= hugetlb_pages_alloc_boot_node;
> +	job.start	= 0;
> +	job.size	= h->max_huge_pages;
>  
> -	prep_and_add_allocated_folios(h, &folio_list);
> +	/*
> +	 * job.max_threads is twice the num_node_state(N_MEMORY),
> +	 *
> +	 * Tests below indicate that a multiplier of 2 significantly improves
> +	 * performance, and although larger values also provide improvements,
> +	 * the gains are marginal.
> +	 *
> +	 * Therefore, choosing 2 as the multiplier strikes a good balance between
> +	 * enhancing parallel processing capabilities and maintaining efficient
> +	 * resource management.
> +	 *
> +	 * +------------+-------+-------+-------+-------+-------+
> +	 * | multiplier |   1   |   2   |   3   |   4   |   5   |
> +	 * +------------+-------+-------+-------+-------+-------+
> +	 * | 256G 2node | 358ms | 215ms | 157ms | 134ms | 126ms |
> +	 * | 2T   4node | 979ms | 679ms | 543ms | 489ms | 481ms |
> +	 * | 50G  2node | 71ms  | 44ms  | 37ms  | 30ms  | 31ms  |
> +	 * +------------+-------+-------+-------+-------+-------+
> +	 */
> +	job.max_threads	= num_node_state(N_MEMORY) * 2;
> +	job.min_chunk	= h->max_huge_pages / num_node_state(N_MEMORY) / 2;

For a single huge page, min_chunk comes out to 0.  padata doesn't
explicitly handle that; 'align' being 1 happens to save us from a
divide-by-zero later on.  It's an odd corner case, worth clamping to at
least 1 if there were another version.


I'm not sure what "efficient resource management" means here -- avoiding
lock contention?  The system is waiting on this initialization before it
can start pid 1, and on big systems most CPUs will be idle during boot,
so why not use the available resources to speed it up further?
max_threads could scale with CPU count rather than a magic per-node
multiplier.

With that said, the major gain is already there, so either way,

Acked-by: Daniel Jordan <daniel.m.jordan@...cle.com> # padata
