Message-ID: <55191C3D.8070800@cn.fujitsu.com>
Date: Mon, 30 Mar 2015 17:49:49 +0800
From: Gu Zheng <guz.fnst@...fujitsu.com>
To: Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
CC: Tejun Heo <tj@...nel.org>, <linux-kernel@...r.kernel.org>,
<laijs@...fujitsu.com>, <isimatu.yasuaki@...fujitsu.com>,
<tangchen@...fujitsu.com>, <izumi.taku@...fujitsu.com>
Subject: Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed
Hi Kame-san,
On 03/27/2015 12:42 AM, Kamezawa Hiroyuki wrote:
> On 2015/03/27 0:18, Tejun Heo wrote:
>> Hello,
>>
>> On Thu, Mar 26, 2015 at 01:04:00PM +0800, Gu Zheng wrote:
>>> At init stage, wq generates the NUMA affinity (pool->node) of the
>>> per-cpu workqueues for all possible CPUs, which means the affinity of
>>> the currently not-present CPUs may be incorrect. So we need to update
>>> pool->node for a newly added CPU to the correct node when it is being
>>> prepared for onlining; otherwise, if node hotplug has occurred, wq
>>> will try to create workers on an invalid node.
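(To make the above concrete, the idea of the fix is roughly the sketch
below; wq_update_numa_mapping() is just a placeholder name here, not the
exact code of the patch:)

        static void wq_update_numa_mapping(int cpu)
        {
                struct worker_pool *pool;
                int node = cpu_to_node(cpu);

                /* fix up pool->node for this CPU's per-cpu pools */
                for_each_cpu_worker_pool(pool, cpu)
                        pool->node = node;
        }

called from the CPU_UP_PREPARE notifier, before any worker is created
for the incoming CPU.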
>>
>> If the mapping is gonna be static once the cpus show up, any chance we
>> can initialize that for all possible cpus during boot?
>>
>
> I think the kernel can define all possible
>
> cpuid <-> lapicid <-> pxm <-> nodeid
>
> mappings at boot using the firmware table information.
Could you explain more?
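Do you mean parsing the firmware tables early enough that the mapping is
recorded for every possible CPU, hotpluggable ones included? Roughly like
the sketch below, using the existing x86/ACPI helpers (this is just my
guess at what you have in mind):

        /*
         * For each processor affinity entry found in the firmware
         * tables, including entries for not-yet-present CPUs:
         */
        node = acpi_map_pxm_to_node(pxm);       /* pxm    -> nodeid */
        set_apicid_to_node(apicid, node);       /* apicid -> nodeid */
        numa_set_node(cpu, node);               /* cpuid  -> nodeid */

If cpu_to_node() were correct for all possible CPUs this way, wq could
keep using it at init as-is, as Tejun suggests.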
Regards,
Gu
>
> One concern is the current x86 logic for memory-less nodes vs. memory
> hotplug (as I explained before).
>
> My idea is:
> step 1. Build all possible cpuid <-> apicid <-> pxm <-> nodeid mappings at boot.
>
> But this may be overwritten by x86's memory-less node logic. So,
> step 2. Check whether the node is online before calling kmalloc(); if it
> is offline, use -1 rather than updating the workqueue's attributes.
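For step 2, I guess you mean something like the sketch below in the
worker allocation path (e.g. where the worker struct is allocated):

        int node = pool->node;

        if (!node_online(node))
                node = NUMA_NO_NODE;    /* -1: let the allocator fall back */

        worker = kzalloc_node(sizeof(*worker), GFP_KERNEL, node);

That would avoid passing an offline node to the allocator without
touching the workqueue attributes at all.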
>
> Thanks,
> -Kame