Message-ID: <551436F2.5020804@jp.fujitsu.com>
Date: Fri, 27 Mar 2015 01:42:26 +0900
From: Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Tejun Heo <tj@...nel.org>, Gu Zheng <guz.fnst@...fujitsu.com>
CC: <linux-kernel@...r.kernel.org>, <laijs@...fujitsu.com>,
<isimatu.yasuaki@...fujitsu.com>, <tangchen@...fujitsu.com>,
<izumi.taku@...fujitsu.com>
Subject: Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed
On 2015/03/27 0:18, Tejun Heo wrote:
> Hello,
>
> On Thu, Mar 26, 2015 at 01:04:00PM +0800, Gu Zheng wrote:
>> The workqueue code generates the NUMA affinity (pool->node) for every
>> possible CPU's per-cpu workqueue at init time, which means the affinity
>> of currently not-present CPUs may be incorrect. We therefore need to
>> update pool->node to the correct node for a newly added CPU when it is
>> being prepared to come online; otherwise the kernel may try to create a
>> worker on an invalid node after node hotplug.
>
> If the mapping is gonna be static once the cpus show up, any chance we
> can initialize that for all possible cpus during boot?
>
I think the kernel can define the complete
	cpuid <-> lapicid <-> pxm <-> nodeid
mapping at boot using firmware table information.
One concern is how the current x86 logic for memory-less nodes interacts
with memory hotplug (as I explained before).
My idea is:
step 1: Build the complete cpuid <-> apicid <-> pxm <-> nodeid mapping at
        boot. This may be overwritten by x86's memory-less node logic, so:
step 2: Check whether the node is online before calling kmalloc(); if it
        is offline, use -1, rather than updating the workqueue's attributes.
Thanks,
-Kame
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/