Message-Id: <a50f35a1-ca1e-56b7-3a22-55cb3abdf093@linux.vnet.ibm.com>
Date: Mon, 19 Jun 2017 17:10:20 -0500
From: Michael Bringmann <mwb@...ux.vnet.ibm.com>
To: linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
Michael Bringmann <mwb@...ux.vnet.ibm.com>,
David Gibson <david@...son.dropbear.id.au>,
Reza Arbab <arbab@...ux.vnet.ibm.com>,
John Allen <jallen@...ux.vnet.ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Bharata B Rao <bharata@...ux.vnet.ibm.com>,
Shailendra Singh <shailendras@...dia.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Rashmica Gupta <rashmicy@...il.com>,
Ingo Molnar <mingo@...nel.org>
Subject: [PATCH V6 1/2] powerpc/hotplug: Ensure enough nodes avail for
operations
On systems such as PowerPC that allow 'hot-add' of CPU or memory
resources, the new resources may need to be placed in nodes that were
not used for such resources at boot. In the kernel, however, any node
that is used must be defined and initialized at boot time. To meet both
needs, this patch adds a new kernel command-line option
(numnodes=<int>), consumed by the PowerPC architecture-specific code,
that specifies the maximum number of nodes the kernel will ever need in
its current hardware environment. The PowerPC boot code that
initializes nodes reads this value and uses it to ensure that all of
the desired nodes are set up in 'node_possible_map' and related
structures.
Signed-off-by: Michael Bringmann <mwb@...ux.vnet.ibm.com>
---
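A brief usage illustration (the node count here is only an example): on
a system where the platform may later hot-add resources into as many as
four nodes, the maximum is passed on the kernel command line at boot:

    numnodes=4

Since the value is parsed with memparse(), a plain decimal count
("numnodes=4") or a hexadecimal form ("numnodes=0x4") should both be
accepted.
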
arch/powerpc/mm/numa.c | 31 +++++++++++++++++++++++++++++++
1 file changed, 31 insertions(+)
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 2b808a1..e6ee829 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -60,10 +60,27 @@
 static int n_mem_addr_cells, n_mem_size_cells;
 static int form1_affinity;
 
+#define TOPOLOGY_DEF_NUM_NODES 0
 #define MAX_DISTANCE_REF_POINTS 4
 static int distance_ref_points_depth;
 static const __be32 *distance_ref_points;
 static int distance_lookup_table[MAX_NUMNODES][MAX_DISTANCE_REF_POINTS];
+static int topology_num_nodes = TOPOLOGY_DEF_NUM_NODES;
+
+/*
+ * Topology-related early parameters
+ */
+static int __init early_num_nodes(char *p)
+{
+	if (!p)
+		return 1;
+
+	topology_num_nodes = memparse(p, &p);
+	dbg("topology num nodes = %d\n", topology_num_nodes);
+
+	return 0;
+}
+early_param("numnodes", early_num_nodes);
 
 /*
  * Allocate node_to_cpumask_map based on number of available nodes
@@ -892,6 +909,18 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
 	NODE_DATA(nid)->node_spanned_pages = spanned_pages;
 }
 
+static void __init setup_min_nodes(void)
+{
+	int i, l = topology_num_nodes;
+
+	for (i = 0; i < l; i++) {
+		if (!node_possible(i)) {
+			setup_node_data(i, 0, 0);
+			node_set(i, node_possible_map);
+		}
+	}
+}
+
 void __init initmem_init(void)
 {
 	int nid, cpu;
@@ -911,6 +940,8 @@ void __init initmem_init(void)
 	 */
 	nodes_and(node_possible_map, node_possible_map, node_online_map);
 
+	setup_min_nodes();
+
 	for_each_online_node(nid) {
 		unsigned long start_pfn, end_pfn;
 
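To illustrate the intended effect (the values here are only an
example), a machine booted with "numnodes=4" that comes up with a
single online node should still report the additional nodes as
possible, for example:

    $ cat /sys/devices/system/node/possible
    0-3
    $ cat /sys/devices/system/node/online
    0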