Message-ID: <548E3948.7040105@cn.fujitsu.com>
Date: Mon, 15 Dec 2014 09:28:40 +0800
From: Lai Jiangshan <laijs@...fujitsu.com>
To: Tejun Heo <tj@...nel.org>
CC: <linux-kernel@...r.kernel.org>,
Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>,
"Gu, Zheng" <guz.fnst@...fujitsu.com>,
tangchen <tangchen@...fujitsu.com>,
Hiroyuki KAMEZAWA <kamezawa.hiroyu@...fujitsu.com>
Subject: Re: [PATCH 4/5] workqueue: update NUMA affinity for the node lost
CPU
On 12/13/2014 01:27 AM, Tejun Heo wrote:
> On Fri, Dec 12, 2014 at 06:19:54PM +0800, Lai Jiangshan wrote:
>> We fixed the major cases in which the NUMA mapping changes.
>>
>> We still assume that when the node<->cpu mapping changes, the
>> original node goes offline; the current memory-hotplug code also
>> ensures this.
>>
>> This assumption might change in the future, and orig_node may then
>> remain online in some cases. In those cases, the cpumask of
>> orig_node's pwqs still contains the onlining CPU, which now belongs
>> to another node, so a worker may run on the onlining CPU (i.e., on
>> the wrong node).
>>
>> So we drop this assumption and make the code call
>> wq_update_unbound_numa() to update the affinity in this case.
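
(For reference, such an update would follow the existing CPU hotplug
path, which refreshes every unbound workqueue's affinity for the CPU.
A minimal sketch, assuming the current names in kernel/workqueue.c;
update_unbound_numa_for_cpu() is a hypothetical wrapper, not an
existing function:)

	/*
	 * Sketch: walk all workqueues and let wq_update_unbound_numa()
	 * recompute the per-node pwq for the CPU coming up or going
	 * down, as the CPU hotplug callbacks already do.
	 */
	static void update_unbound_numa_for_cpu(int cpu, bool online)
	{
		struct workqueue_struct *wq;

		mutex_lock(&wq_pool_mutex);
		list_for_each_entry(wq, &workqueues, list)
			wq_update_unbound_numa(wq, cpu, online);
		mutex_unlock(&wq_pool_mutex);
	}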
>
> This is seriously obfuscating. I really don't think meddling with
> existing pools is a good idea.
> The foundations those pools were standing on are gone.
This statement is not true unless we write some code to force it, for
example by dequeuing the pools from unbound_pool_hash.
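
(A minimal sketch of what "force it" would mean, using the names from
kernel/workqueue.c; put_unbound_pool() already does this unhashing once
the last reference is dropped, with draining and freeing elided here:)

	mutex_lock(&wq_pool_mutex);
	if (!--pool->refcnt)
		/* no longer findable by get_unbound_pool() */
		hash_del(&pool->hash_node);
	mutex_unlock(&wq_pool_mutex);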
> Drain and discard the pools. Please don't try to retro-fit them to
> new foundations.
>
> Thanks.
>