Message-ID: <1cfdf574-90cf-4143-b735-7b8354098e6d@linux.dev>
Date: Thu, 18 Jan 2024 14:15:24 +0800
From: Gang Li <gang.li@...ux.dev>
To: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>, David Rientjes
<rientjes@...gle.com>, linux-kernel@...r.kernel.org,
ligang.bdlg@...edance.com, David Hildenbrand <david@...hat.com>,
Muchun Song <muchun.song@...ux.dev>, Gang Li <gang.li@...ux.dev>
Subject: Re: [PATCH v3 3/7] padata: dispatch works on different nodes

Hi Tim,

On 2024/1/18 06:14, Tim Chen wrote:
> On Mon, 2024-01-15 at 16:57 +0800, Gang Li wrote:
>> How about:
>> ```
>> nid = global_nid;
>> list_for_each_entry(pw, &works, pw_list)
>> 	if (job->numa_aware) {
>> 		int old_node = nid;
>>
>> 		queue_work_node(nid, system_unbound_wq, &pw->pw_work);
>> 		nid = next_node(nid, node_states[N_CPU]);
>> 		cmpxchg(&global_nid, old_node, nid);
>> 	} else
>> 		queue_work(system_unbound_wq, &pw->pw_work);
>>
>> ```
>>
My original idea was to have all the works from a single
padata_do_multithreaded() call distributed consecutively across NUMA
nodes. In that case, the placement within one call stays predictable
even when several calls run in parallel.
>
> I am thinking something like
>
> static volatile atomic_t last_used_nid;
>
> list_for_each_entry(pw, &works, pw_list)
> 	if (job->numa_aware) {
> 		int old_node = atomic_read(&last_used_nid);
>
> 		do {
> 			nid = next_node_in(old_node, node_states[N_CPU]);
> 		} while (!atomic_try_cmpxchg(&last_used_nid, &old_node, nid));
> 		queue_work_node(nid, system_unbound_wq, &pw->pw_work);
> 	} else {
> 		queue_work(system_unbound_wq, &pw->pw_work);
> 	}
>
> Note that we need to use next_node_in so we'll wrap around the node mask.

However, having the works from all parallel padata_do_multithreaded()
calls globally distributed across NUMA nodes is also fine by me. I
don't have a strong preference.
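
Putting your pieces together, the dispatch loop could end up looking
something like the sketch below. Just a sketch: I kept last_used_nid
as a function-local static, and I don't think it needs the volatile
qualifier, since the atomic_t accessors already provide those
semantics:

```
	/* One counter shared by every padata_do_multithreaded() call. */
	static atomic_t last_used_nid;

	list_for_each_entry(pw, &works, pw_list)
		if (job->numa_aware) {
			int old_node = atomic_read(&last_used_nid);

			/*
			 * Atomically claim the next node so concurrent
			 * callers never pick the same node twice in a
			 * row.  next_node_in() wraps around the nodemask,
			 * unlike next_node().
			 */
			do {
				nid = next_node_in(old_node, node_states[N_CPU]);
			} while (!atomic_try_cmpxchg(&last_used_nid, &old_node, nid));
			queue_work_node(nid, system_unbound_wq, &pw->pw_work);
		} else {
			queue_work(system_unbound_wq, &pw->pw_work);
		}
```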