Message-ID: <548E411C.3030102@cn.fujitsu.com>
Date: Mon, 15 Dec 2014 10:02:04 +0800
From: Lai Jiangshan <laijs@...fujitsu.com>
To: Tejun Heo <tj@...nel.org>
CC: <linux-kernel@...r.kernel.org>,
Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>,
"Gu, Zheng" <guz.fnst@...fujitsu.com>,
tangchen <tangchen@...fujitsu.com>,
Hiroyuki KAMEZAWA <kamezawa.hiroyu@...fujitsu.com>
Subject: Re: [PATCH 2/5] workqueue: update wq_numa_possible_cpumask
On 12/13/2014 01:18 AM, Tejun Heo wrote:
> On Fri, Dec 12, 2014 at 06:19:52PM +0800, Lai Jiangshan wrote:
> ...
>> +static void wq_update_numa_mapping(int cpu)
>> +{
>> + int node, orig_node = NUMA_NO_NODE, new_node = cpu_to_node(cpu);
>> +
>> + lockdep_assert_held(&wq_pool_mutex);
>> +
>> + if (!wq_numa_enabled)
>> + return;
>> +
>> + /* the node of onlining CPU is not NUMA_NO_NODE */
>> + if (WARN_ON(new_node == NUMA_NO_NODE))
>> + return;
>> +
>> + /* test whether the NUMA node mapping is changed. */
>> + if (cpumask_test_cpu(cpu, wq_numa_possible_cpumask[new_node]))
>> + return;
>> +
>> + /* find the origin node */
>> + for_each_node(node) {
>> + if (cpumask_test_cpu(cpu, wq_numa_possible_cpumask[node])) {
>> + orig_node = node;
>> + break;
>> + }
>> + }
>> +
>> + /* multiple mappings may have changed; re-initialize both masks. */
>> + cpumask_clear(wq_numa_possible_cpumask[new_node]);
>> + if (orig_node != NUMA_NO_NODE)
>> + cpumask_clear(wq_numa_possible_cpumask[orig_node]);
>> + for_each_possible_cpu(cpu) {
>> + node = cpu_to_node(cpu);
>> + if (node == new_node)
>> + cpumask_set_cpu(cpu, wq_numa_possible_cpumask[new_node]);
>> + else if (orig_node != NUMA_NO_NODE && node == orig_node)
>> + cpumask_set_cpu(cpu, wq_numa_possible_cpumask[orig_node]);
>> + }
>> +}
>
> Let's please move this to NUMA code and properly update it on actual
> mapping changes.
>
Hi, TJ
I didn't quite get your meaning. What did you mean by "NUMA code"? Which of the following did you mean?
1) "NUMA code" = system's NUMA memory hotplug code, AKA, keep the numa mapping stable
I think this is the better idea. This idea came to my mind immediately at the time
I received the bug report. And after some discussions, I was told that it is too HARD
to keep the numa mapping stable across multiple physical system-board/node online/offline.
This idea makes the assumption "the numa mapping is stable after system booted" as
a restriction of the NUMA. And it will favor all the code outside of the numa code,
otherwise (we deny the assumption like this patchset) all the code which use
"cpu_to_node()" and cache the return value will have to be fixed up like this patchset.
Hi, hotplug team, any ideas on how to keep the NUMA mapping stable?
2) "NUMA code" = workqueue's NUMA code
I think I already did it, the code I added was right below the code of
wq_update_unbound_numa(). Or I missed something?
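
For reference, here is roughly where I intend the new helper to be invoked.
This is a sketch reconstructed around the existing CPU-online path, not the
exact hunk from the patch:

static int workqueue_cpu_up_callback(struct notifier_block *nfb,
				     unsigned long action, void *hcpu)
{
	int cpu = (unsigned long)hcpu;
	struct workqueue_struct *wq;

	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_DOWN_FAILED:
	case CPU_ONLINE:
		mutex_lock(&wq_pool_mutex);

		/* refresh wq_numa_possible_cpumask[] before the pwqs are updated */
		wq_update_numa_mapping(cpu);

		/* ... existing per-cpu pool handling elided ... */

		/* update NUMA affinity of unbound workqueues */
		list_for_each_entry(wq, &workqueues, list)
			wq_update_unbound_numa(wq, cpu, true);

		mutex_unlock(&wq_pool_mutex);
		break;
	}
	return NOTIFY_OK;
}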
Thanks,
Lai