Message-ID: <bf54b057-0d97-4614-f88e-f95b9a8eb88a@cn.fujitsu.com>
Date: Wed, 27 Jul 2016 09:18:19 +0800
From: Dou Liyang <douly.fnst@...fujitsu.com>
To: "Rafael J. Wysocki" <rjw@...ysocki.net>
CC: Andrew Morton <akpm@...ux-foundation.org>, <cl@...ux.com>,
<tj@...nel.org>, <mika.j.penttila@...il.com>, <mingo@...hat.com>,
<hpa@...or.com>, <yasu.isimatu@...il.com>,
<isimatu.yasuaki@...fujitsu.com>, <kamezawa.hiroyu@...fujitsu.com>,
<izumi.taku@...fujitsu.com>, <gongzhaogang@...pur.com>,
<len.brown@...el.com>, <lenb@...nel.org>, <tglx@...utronix.de>,
<chen.tang@...ystack.cn>, <rafael@...nel.org>, <x86@...nel.org>,
<linux-acpi@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>
Subject: Re: [PATCH v9 0/7] Make cpuid <-> nodeid mapping persistent
Hi, RJ
On 2016/07/26 19:53, Rafael J. Wysocki wrote:
> On Tuesday, July 26, 2016 11:59:38 AM Dou Liyang wrote:
>>
>> On 2016/07/26 07:20, Andrew Morton wrote:
>>> On Mon, 25 Jul 2016 16:35:42 +0800 Dou Liyang <douly.fnst@...fujitsu.com> wrote:
>>>
>>>> [Problem]
>>>>
>>>> The cpuid <-> nodeid mapping is first established at boot time, and workqueue
>>>> caches that mapping in wq_numa_possible_cpumask in wq_numa_init().
>>>>
>>>> When a node goes online/offline, its cpuid <-> nodeid mappings are established/destroyed,
>>>> which means the cpuid <-> nodeid mapping changes whenever node hotplug happens. But
>>>> workqueue does not update wq_numa_possible_cpumask.
>>>>
>>>> So here is the problem:
>>>>
>>>> Assume we have the following cpuid <-> nodeid in the beginning:
>>>>
>>>> Node | CPU
>>>> ------------------------
>>>> node 0 | 0-14, 60-74
>>>> node 1 | 15-29, 75-89
>>>> node 2 | 30-44, 90-104
>>>> node 3 | 45-59, 105-119
>>>>
>>>> and we hot-remove node2 and node3, it becomes:
>>>>
>>>> Node | CPU
>>>> ------------------------
>>>> node 0 | 0-14, 60-74
>>>> node 1 | 15-29, 75-89
>>>>
>>>> and we hot-add node4 and node5, it becomes:
>>>>
>>>> Node | CPU
>>>> ------------------------
>>>> node 0 | 0-14, 60-74
>>>> node 1 | 15-29, 75-89
>>>> node 4 | 30-59
>>>> node 5 | 90-119
>>>>
>>>> But in wq_numa_possible_cpumask, cpu30 is still mapped to node2, and so on.
>>>>
>>>> When a pool workqueue is initialized, if its cpumask belongs to a node, its
>>>> pool->node will be mapped to that node. And memory used by this workqueue will
>>>> also be allocated on that node.
>>>
>>> Plan B is to hunt down and fix up all the workqueue structures at
>>> hotplug-time. Has that option been evaluated?
>>>
>>
>> Yes, that option has been evaluated in this patch:
>> http://www.gossamer-threads.com/lists/linux/kernel/2116748
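
To make the comparison concrete: simply rebuilding the cached table at
hotplug time would look roughly like the sketch below. The helper name and
the hook point are hypothetical, not code from that discussion;
wq_numa_possible_cpumask is the existing per-node cache in
kernel/workqueue.c. The hard part is that existing unbound pools and pwqs
would also have to be hunted down and fixed up, which is where the
complexity of "Plan B" comes from.

	/* Hypothetical helper, only a sketch of the "Plan B" idea. */
	static void wq_numa_rebuild_possible_cpumask(void)
	{
		int node, cpu;

		/* throw away the stale boot-time snapshot */
		for_each_node(node)
			cpumask_clear(wq_numa_possible_cpumask[node]);

		/* rebuild it from the current cpuid <-> nodeid mapping */
		for_each_possible_cpu(cpu) {
			node = cpu_to_node(cpu);
			if (node != NUMA_NO_NODE)
				cpumask_set_cpu(cpu, wq_numa_possible_cpumask[node]);
		}
	}
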
>>
>>>
>>> Your fix is x86-only and this bug presumably affects other
>>> architectures, yes? I think a "Plan B" would fix all architectures?
>>>
>>
>> Yes, the bug presumably affects the few architectures that support both CPU
>> hotplug and NUMA.
>>
>> We have posted the "Plan B" to the community and got a lot of advice and
>> ideas. Based on these suggestions, we carefully weighed the two plans and
>> then chose the first one.
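
At a high level, the first plan (this series) fixes the nodeid of every
possible CPU once at boot, from the firmware tables, so that later
hot-add/hot-remove cannot change the cpuid <-> nodeid mapping. Below is a
very rough conceptual sketch, not the actual patch code:
firmware_node_for_cpu() is a hypothetical placeholder for the SRAT/MADT
lookup, while set_cpu_numa_node() is an existing kernel helper.

	/* Conceptual sketch only; names are illustrative. */
	static void __init map_all_possible_cpus_to_nodes(void)
	{
		int cpu;

		for_each_possible_cpu(cpu) {
			/* hypothetical: node reported by firmware for this cpu */
			int node = firmware_node_for_cpu(cpu);

			if (node == NUMA_NO_NODE)
				node = 0;	/* sane default for unknown CPUs */

			set_cpu_numa_node(cpu, node);
		}
	}
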
>>
>>>
>>> Thirdly, what is the merge path for these patches? Is an x86
>>> or ACPI maintainer working with you on them?
>>
>> Yes, we have received a lot of guidance and help from RJ, who is an ACPI maintainer.
>
> FWIW, the patches are fine by me from the ACPI perspective.
>
> If you want me to apply them, though, ACKs from the x86 and mm maintainers
> will be necessary.
>
I will continue to investigate this bug and wait for the maintainers' advice.
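
For anyone following along, the place where the stale cache bites is
get_unbound_pool(). Paraphrased from kernel/workqueue.c (details may differ
between kernel versions), it does roughly the following, so a cpumask that
still matches a removed node's stale entry leaves pool->node pointing at an
offline node, and later allocations land on the wrong node:

	/* if the pool's cpumask is contained in one NUMA node, remember it */
	if (wq_numa_enabled) {
		for_each_node(node) {
			if (cpumask_subset(pool->attrs->cpumask,
					   wq_numa_possible_cpumask[node])) {
				pool->node = node;	/* may be a stale/offline node */
				break;
			}
		}
	}
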
> Thanks,
> Rafael
>
>
>
Thanks.
Dou