Date: Thu, 11 Jan 2024 09:50:20 -0800
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Gang Li <gang.li@...ux.dev>, David Hildenbrand <david@...hat.com>, David
 Rientjes <rientjes@...gle.com>, Mike Kravetz <mike.kravetz@...cle.com>,
 Muchun Song <muchun.song@...ux.dev>, Andrew Morton
 <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
 ligang.bdlg@...edance.com
Subject: Re: [PATCH v3 3/7] padata: dispatch works on different nodes

On Tue, 2024-01-02 at 21:12 +0800, Gang Li wrote:
> When a group of tasks that access different nodes are scheduled on the
> same node, they may hit memory-bandwidth bottlenecks and higher access
> latency.
> 
> Thus, a numa_aware flag is introduced here, allowing tasks to be
> distributed across different nodes to take full advantage of
> multi-node systems.
> 
> Signed-off-by: Gang Li <gang.li@...ux.dev>
> ---
>  include/linux/padata.h | 3 +++
>  kernel/padata.c        | 8 ++++++--
>  mm/mm_init.c           | 1 +
>  3 files changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/padata.h b/include/linux/padata.h
> index 495b16b6b4d72..f79ccd50e7f40 100644
> --- a/include/linux/padata.h
> +++ b/include/linux/padata.h
> @@ -137,6 +137,8 @@ struct padata_shell {
>   *             appropriate for one worker thread to do at once.
>   * @max_threads: Max threads to use for the job, actual number may be less
>   *               depending on task size and minimum chunk size.
> + * @numa_aware: Dispatch jobs to different nodes. If a node only has memory but
> + *              no CPU, dispatch its jobs to a random CPU.
>   */
>  struct padata_mt_job {
>  	void (*thread_fn)(unsigned long start, unsigned long end, void *arg);
> @@ -146,6 +148,7 @@ struct padata_mt_job {
>  	unsigned long		align;
>  	unsigned long		min_chunk;
>  	int			max_threads;
> +	bool			numa_aware;
>  };
>  
>  /**
> diff --git a/kernel/padata.c b/kernel/padata.c
> index 179fb1518070c..1c2b3a337479e 100644
> --- a/kernel/padata.c
> +++ b/kernel/padata.c
> @@ -485,7 +485,7 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
>  	struct padata_work my_work, *pw;
>  	struct padata_mt_job_state ps;
>  	LIST_HEAD(works);
> -	int nworks;
> +	int nworks, nid = 0;

If we always start from 0, we will be biased towards the low-numbered nodes
and may not use the high-numbered nodes at all.  Suggest you make nid static,
i.e.

	static int nid = 0;

so the round-robin position is kept across calls (see the sketch further down).

>  
>  	if (job->size == 0)
>  		return;
> @@ -517,7 +517,11 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
>  	ps.chunk_size = roundup(ps.chunk_size, job->align);
>  
>  	list_for_each_entry(pw, &works, pw_list)
> -		queue_work(system_unbound_wq, &pw->pw_work);
> +		if (job->numa_aware)
> +			queue_work_node((++nid % num_node_state(N_MEMORY)),
> +					system_unbound_wq, &pw->pw_work);

I think we should use nid = next_node(nid, node_states[N_CPU]) instead of
++nid % num_node_state(N_MEMORY).  That way you pick the next node that
actually has a CPU to handle the job, and you avoid computing an index that
may not even be a valid node id when node numbering is sparse.
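
Something along these lines (untested sketch; note I used next_node_in()
rather than next_node() so the walk wraps back around to the first node
with a CPU, and nid is the static counter suggested above):

	list_for_each_entry(pw, &works, pw_list) {
		if (job->numa_aware) {
			int old_node = nid;

			/*
			 * Queue on the current node, then advance to the
			 * next node that has a CPU, wrapping around at the
			 * end of the nodemask.
			 */
			nid = next_node_in(nid, node_states[N_CPU]);
			queue_work_node(old_node, system_unbound_wq,
					&pw->pw_work);
		} else {
			queue_work(system_unbound_wq, &pw->pw_work);
		}
	}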

Tim

> +		else
> +			queue_work(system_unbound_wq, &pw->pw_work);
>  
>  	/* Use the current thread, which saves starting a workqueue worker. */
>  	padata_work_init(&my_work, padata_mt_helper, &ps, PADATA_WORK_ONSTACK);
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 89dc29f1e6c6f..59fcffddf65a3 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -2225,6 +2225,7 @@ static int __init deferred_init_memmap(void *data)
>  			.align       = PAGES_PER_SECTION,
>  			.min_chunk   = PAGES_PER_SECTION,
>  			.max_threads = max_threads,
> +			.numa_aware  = false,
>  		};
>  
>  		padata_do_multithreaded(&job);
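
For completeness, a caller that wants the per-node dispatch would just set
the new flag; a sketch with placeholder names (my_init_fn and the other
fields here are hypothetical, not from this patch):

	struct padata_mt_job job = {
		.thread_fn   = my_init_fn,	/* hypothetical worker */
		.fn_arg      = arg,
		.start       = start,
		.size        = size,
		.align       = align,
		.min_chunk   = min_chunk,
		.max_threads = max_threads,
		.numa_aware  = true,		/* opt in to per-node dispatch */
	};

	padata_do_multithreaded(&job);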

