Message-Id: <54ebacf1-1249-cc6a-80a5-b293e581f401@linux.vnet.ibm.com>
Date: Fri, 2 Jun 2017 00:24:54 -0500
From: Michael Bringmann <mwb@...ux.vnet.ibm.com>
To: Michael Ellerman <mpe@...erman.id.au>,
Reza Arbab <arbab@...ux.vnet.ibm.com>
Cc: Balbir Singh <bsingharora@...il.com>, linux-kernel@...r.kernel.org,
Paul Mackerras <paulus@...ba.org>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
Bharata B Rao <bharata@...ux.vnet.ibm.com>,
Shailendra Singh <shailendras@...dia.com>,
Thomas Gleixner <tglx@...utronix.de>,
linuxppc-dev@...ts.ozlabs.org,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Michael Bringmann from Kernel Team <mbringm@...ibm.com>
Subject: Re: [Patch 2/2]: powerpc/hotplug/mm: Fix hot-add memory node assoc
On 06/01/2017 04:36 AM, Michael Ellerman wrote:
> Michael Bringmann <mwb@...ux.vnet.ibm.com> writes:
>
>> On 05/29/2017 12:32 AM, Michael Ellerman wrote:
>>> Reza Arbab <arbab@...ux.vnet.ibm.com> writes:
>>>
>>>> On Fri, May 26, 2017 at 01:46:58PM +1000, Michael Ellerman wrote:
>>>>> Reza Arbab <arbab@...ux.vnet.ibm.com> writes:
>>>>>
>>>>>> On Thu, May 25, 2017 at 04:19:53PM +1000, Michael Ellerman wrote:
>>>>>>> The commit message for 3af229f2071f says:
>>>>>>>
>>>>>>> In practice, we never see a system with 256 NUMA nodes, and in fact, we
>>>>>>> do not support node hotplug on power in the first place, so the nodes
>>>>>>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>>>>>> that are online when we come up are the nodes that will be present for
>>>>>>> the lifetime of this kernel.
>>>>>>>
>>>>>>> Is that no longer true?
>>>>>>
>>>>>> I don't know what the reasoning behind that statement was at the time,
>>>>>> but as far as I can tell, the only thing missing for node hotplug now is
>>>>>> Balbir's patchset [1]. He fixes the resource issue which motivated
>>>>>> 3af229f2071f and reverts it.
>>>>>>
>>>>>> With that set, I can instantiate a new numa node just by doing
>>>>>> add_memory(nid, ...) where nid doesn't currently exist.
>>>>>
>>>>> But does that actually happen on any real system?
>>>>
>>>> I don't know if anything currently tries to do this. My interest in
>>>> having this working is so that in the future, our coherent gpu memory
>>>> could be added as a distinct node by the device driver.
>>>
>>> Sure. If/when that happens, we would hopefully still have some way to
>>> limit the size of the possible map.
>>>
>>> That would ideally be a firmware property that tells us the maximum
>>> number of GPUs that might be hot-added, or we punt and cap it at some
>>> "sane" maximum number.
>>>
>>> But until that happens it's silly to say we can have up to 256 nodes
>>> when in practice most of our systems have 8 or less.
>>>
>>> So I'm still waiting for an explanation from Michael B on how he's
>>> seeing this bug in practice.
>>
>> I already answered this in an earlier message.
>
> Which one? I must have missed it.
>
>> I will give an example.
>>
>> * Let there be a configuration with nodes (0, 4-5, 8) that boots with 1 VP
>> and 10G of memory in a shared processor configuration.
>> * At boot time, 4 nodes are put into the possible map by the PowerPC boot
>> code.
>
> I'm pretty sure we never add nodes to the possible map, it starts out
> with MAX_NUMNODES possible and that's it.
Let me reword that. It enables the nodes in the possible map.
>
> Do you actually see mention of nodes 0 and 8 in the dmesg?
When the 'numa.c' code is built with debug messages, and the system was
given that configuration by pHyp, yes, I did.
>
> What does it say?
The debug message for each core thread would be something like,

    removing cpu 64 from node 0
    adding cpu 64 to node 8

repeated for all 8 threads of the CPU, and usually with the messages
for all of the CPUs coming out intermixed on the console/dmesg log.
>
>> * Subsequently, the NUMA code executes and puts the 10G memory into nodes
>> 4 & 5. No memory goes into Node 0. So we now have 2 nodes in the
>> node_online_map.
>> * The VP and its threads get assigned to Node 4.
>> * Then when 'initmem_init()' in 'powerpc/numa.c' executes the instruction,
>> node_and(node_possible_map, node_possible_map, node_online_map);
>> the content of the node_possible_map is reduced to nodes 4-5.
>> * Later on we hot-add 90G of memory to the system. It tries to put the
>> memory into nodes 0, 4-5, 8 based on the memory association map. We
>> should see memory put into all 4 nodes. However, since we have reduced
>> the 'node_possible_map' to only nodes 4 & 5, we can now only put memory
>> into 2 of the configured nodes.
>
> Right. So it's not that you're hot adding memory into a previously
> unseen node as you implied in earlier mails.
In the sense that the nodes were defined in the device tree, that is correct.
In the sense that those nodes are deleted from node_possible_map in 'numa.c'
by the instruction 'node_and(node_possible_map, node_possible_map,
node_online_map);', the nodes are no longer available for placing memory or CPUs.
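To make the effect of that intersection concrete, here is a minimal
user-space C sketch that models the node masks as plain bit fields (the
real code of course uses nodemask_t and the nodemask helpers; the node
numbers are just the ones from the example above):

#include <stdio.h>

/* One bit per node; a simplified stand-in for the kernel's nodemask_t. */
typedef unsigned int nodemask;

#define NODE_BIT(n)     (1u << (n))

int main(void)
{
        /* Nodes 0, 4, 5 and 8 are named in the device tree, so they all
         * start out possible (the example configuration above). */
        nodemask possible = NODE_BIT(0) | NODE_BIT(4) | NODE_BIT(5) | NODE_BIT(8);

        /* Only nodes 4 and 5 received memory or CPUs at boot. */
        nodemask online = NODE_BIT(4) | NODE_BIT(5);

        /* The effect of node_and(node_possible_map, node_possible_map,
         * node_online_map) in initmem_init(): */
        possible &= online;

        /* Nodes 0 and 8 have dropped out, so later hot-add operations can
         * no longer place memory there.  Prints 0x30 (nodes 4 and 5). */
        printf("possible map after the intersection: 0x%x\n", possible);
        return 0;
}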
>> # We want to be able to put memory into all 4 nodes via hot-add operations,
>> not only the nodes that 'survive' boot time initialization. We could
>> make a number of changes to ensure that all of the nodes in the initial
>> configuration provided by the pHyp can be used, but this one appears to
>> be the simplest, only using resources requested by the pHyp at boot --
>> even if those resources are not used immediately.
>
> I don't think that's what the patch does. It just marks 32 (!?) nodes as
> online. Or if you're talking about reverting 3af229f2071f that leaves
> you with 256 possible nodes. Both of which are wasteful.
>
> The right fix is to make sure any nodes which are present at boot remain
> in the possible map, even if they don't have memory/CPUs assigned at
> boot.
Okay, I can try to insert code that extracts all of the nodes from the
ibm,associativity-lookup-arrays property and merges them with the nodes
already put into the online map from the CPUs found earlier during boot
of the powerpc code.
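Something along the lines of the following, purely as an untested sketch
for discussion (the function name and the distance_index parameter are
placeholders, and the property walk mirrors what of_drconf_to_nid_single()
does with min_common_depth rather than being verified code):

/* Sketch: mark every node named in ibm,associativity-lookup-arrays as
 * possible, so the node_and() at boot cannot drop it.  Untested, for
 * illustration only; would live in arch/powerpc/mm/numa.c. */
static void __init mark_drconf_nodes_possible(unsigned int distance_index)
{
        struct device_node *memory;
        const __be32 *prop;
        u32 num_arrays, array_sz;
        u32 i, nid;

        memory = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory");
        if (!memory)
                return;

        prop = of_get_property(memory, "ibm,associativity-lookup-arrays", NULL);
        if (!prop) {
                of_node_put(memory);
                return;
        }

        num_arrays = be32_to_cpup(prop++);
        array_sz = be32_to_cpup(prop++);

        for (i = 0; i < num_arrays; i++) {
                /* distance_index stands in for the offset the real code
                 * derives from min_common_depth. */
                nid = be32_to_cpup(&prop[i * array_sz + distance_index]);
                if (nid < MAX_NUMNODES)
                        node_set(nid, node_possible_map);
        }

        of_node_put(memory);
}

Length validation of the property is omitted here, and whether this runs
before or after the existing node_and() call is the sort of detail the
real patch would have to sort out.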
> What does your device tree look like? Can you send us the output of:
>
> $ lsprop /proc/device-tree
See attachment 'device-tree.log'. Note though that this boot of my test
system only has 2 nodes, 0 and 2.
>
> cheers
>
--
Michael W. Bringmann
Linux Technology Center
IBM Corporation
Tie-Line 363-5196
External: (512) 286-5196
Cell: (512) 466-0650
mwb@...ux.vnet.ibm.com