Message-ID: <1452140425-16577-2-git-send-email-tangchen@cn.fujitsu.com>
Date:	Thu, 7 Jan 2016 12:20:21 +0800
From:	Tang Chen <tangchen@...fujitsu.com>
To:	<cl@...ux.com>, <tj@...nel.org>, <jiang.liu@...ux.intel.com>,
	<mika.j.penttila@...il.com>, <mingo@...hat.com>,
	<akpm@...ux-foundation.org>, <rjw@...ysocki.net>, <hpa@...or.com>,
	<yasu.isimatu@...il.com>, <isimatu.yasuaki@...fujitsu.com>,
	<kamezawa.hiroyu@...fujitsu.com>, <izumi.taku@...fujitsu.com>,
	<gongzhaogang@...pur.com>
CC:	<tangchen@...fujitsu.com>, <x86@...nel.org>,
	<linux-acpi@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
	<linux-mm@...ck.org>
Subject: [PATCH 1/5] x86, memhp, numa: Online memory-less nodes at boot time.

Currently, x86 does not support memory-less nodes. A node without memory
will not be onlined, and the cpus on it will be mapped to other online
nodes that do have memory in init_cpu_to_node(). The reason for doing this
is to ensure that each cpu is mapped to a node with memory, so that local
memory can be allocated for that cpu.
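
To make the current behaviour concrete, here is a minimal user-space
sketch (not the kernel code; the distance table and online mask below are
invented) of the nearest-online-node selection that init_cpu_to_node()
relies on today:

/*
 * Toy illustration of the current fallback: a cpu whose node has no
 * memory is attributed to the nearest online node, judged by a
 * SLIT-style distance table. All values are made up.
 */
#include <stdio.h>
#include <limits.h>
#include <stdbool.h>

#define NR_NODES 4

/* Hypothetical node distances; node 3 has cpus but no memory. */
static const int node_distance[NR_NODES][NR_NODES] = {
	{ 10, 20, 20, 30 },
	{ 20, 10, 20, 30 },
	{ 20, 20, 10, 20 },
	{ 30, 30, 20, 10 },
};

static const bool node_online[NR_NODES] = { true, true, true, false };

/* Same idea as the find_near_online_node() removed by this patch. */
static int find_near_online_node(int node)
{
	int n, best_node = -1, min_val = INT_MAX;

	for (n = 0; n < NR_NODES; n++) {
		if (!node_online[n])
			continue;
		if (node_distance[node][n] < min_val) {
			min_val = node_distance[node][n];
			best_node = n;
		}
	}
	return best_node;
}

int main(void)
{
	/* A cpu on memory-less node 3 ends up attributed to node 2. */
	printf("cpu on node 3 -> node %d\n", find_near_online_node(3));
	return 0;
}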

But we don't have to do it this way.

In this patch series, we construct the cpu <-> node mapping for all
possible cpus at boot time, as a fixed 1-1 mapping. Each cpu is mapped to
the node it belongs to, and the mapping is never changed afterwards. If a
node has cpus but no memory, its cpus stay mapped to that memory-less
node, and the memory-less node therefore has to be onlined.
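
For illustration only, a tiny user-space sketch of the intended fixed
mapping (the per-cpu firmware node table is invented, not taken from the
kernel):

/*
 * With the 1-1 mapping proposed here, each possible cpu is bound to the
 * node the firmware reports for it, whether or not that node has memory,
 * and the binding is never changed afterwards.
 */
#include <stdio.h>

#define NR_CPUS		8
#define NUMA_NO_NODE	(-1)

/* Hypothetical firmware-reported node for each possible cpu. */
static const int cpu_to_firmware_node[NR_CPUS] = { 0, 0, 1, 1, 2, 2, 3, 3 };

static int cpu_to_node_map[NR_CPUS];

static void init_cpu_to_node(void)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		int node = cpu_to_firmware_node[cpu];

		if (node == NUMA_NO_NODE)
			continue;
		/* Fixed 1-1 mapping: no redirection to a node with memory. */
		cpu_to_node_map[cpu] = node;
	}
}

int main(void)
{
	int cpu;

	init_cpu_to_node();
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu %d -> node %d\n", cpu, cpu_to_node_map[cpu]);
	return 0;
}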

This patch allocates pgdats for all memory-less nodes and onlines them at
boot time. As a result, when cpus on these memory-less nodes try to
allocate memory from their local node, the allocation automatically falls
back to the proper zones in the zonelists.
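
To show the effect, here is a self-contained user-space sketch (not the
kernel's zonelist code; distances and free-page counts are invented) of
how an allocation from a memory-less node falls back to the nearest node
that actually has memory:

/*
 * Toy model of zonelist fallback: the fallback order for a node is all
 * nodes sorted by distance from it, and an allocation is served by the
 * first node in that order with free memory.
 */
#include <stdio.h>

#define NR_NODES 4

static const int node_distance[NR_NODES][NR_NODES] = {
	{ 10, 20, 20, 30 },
	{ 20, 10, 20, 30 },
	{ 20, 20, 10, 20 },
	{ 30, 30, 20, 10 },
};

/* Node 3 is online but owns no memory. */
static const long node_free_pages[NR_NODES] = { 1024, 1024, 1024, 0 };

/* Fallback order for one node: all nodes sorted by distance from it. */
static void build_fallback_order(int node, int order[NR_NODES])
{
	int i, j;

	for (i = 0; i < NR_NODES; i++)
		order[i] = i;
	/* Simple insertion sort by distance from 'node'. */
	for (i = 1; i < NR_NODES; i++) {
		int n = order[i];

		for (j = i; j > 0 &&
		     node_distance[node][order[j - 1]] >
		     node_distance[node][n]; j--)
			order[j] = order[j - 1];
		order[j] = n;
	}
}

/* Allocate from the first node in the fallback order that has memory. */
static int alloc_page_node(int node)
{
	int order[NR_NODES], i;

	build_fallback_order(node, order);
	for (i = 0; i < NR_NODES; i++)
		if (node_free_pages[order[i]] > 0)
			return order[i];
	return -1;
}

int main(void)
{
	printf("allocation for cpu on node 3 served by node %d\n",
	       alloc_page_node(3));
	return 0;
}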

Signed-off-by: Tang Chen <tangchen@...fujitsu.com>
---
 arch/x86/mm/numa.c     | 27 +++++++++++++--------------
 include/linux/mmzone.h |  1 +
 mm/page_alloc.c        |  2 +-
 3 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index c3b3f65..010edb4 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -704,22 +704,19 @@ void __init x86_numa_init(void)
 	numa_init(dummy_numa_init);
 }
 
-static __init int find_near_online_node(int node)
+static void __init init_memory_less_node(int nid)
 {
-	int n, val;
-	int min_val = INT_MAX;
-	int best_node = -1;
+	unsigned long zones_size[MAX_NR_ZONES] = {0};
+	unsigned long zholes_size[MAX_NR_ZONES] = {0};
 
-	for_each_online_node(n) {
-		val = node_distance(node, n);
+	/* Allocate and initialize node data. Memory-less node is now online.*/
+	alloc_node_data(nid);
+	free_area_init_node(nid, zones_size, 0, zholes_size);
 
-		if (val < min_val) {
-			min_val = val;
-			best_node = n;
-		}
-	}
-
-	return best_node;
+	/*
+	 * All zonelists will be built later in start_kernel() after per cpu
+	 * areas are initialized.
+	 */
 }
 
 /*
@@ -748,8 +745,10 @@ void __init init_cpu_to_node(void)
 
 		if (node == NUMA_NO_NODE)
 			continue;
+
 		if (!node_online(node))
-			node = find_near_online_node(node);
+			init_memory_less_node(node);
+
 		numa_set_node(cpu, node);
 	}
 }
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index e23a9e7..9c4d4d5 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -736,6 +736,7 @@ static inline bool is_dev_zone(const struct zone *zone)
 
 extern struct mutex zonelists_mutex;
 void build_all_zonelists(pg_data_t *pgdat, struct zone *zone);
+void build_zonelists(pg_data_t *pgdat);
 void wakeup_kswapd(struct zone *zone, int order, enum zone_type classzone_idx);
 bool zone_watermark_ok(struct zone *z, unsigned int order,
 		unsigned long mark, int classzone_idx, int alloc_flags);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9d666df..15c0358 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4145,7 +4145,7 @@ static void set_zonelist_order(void)
 		current_zonelist_order = user_zonelist_order;
 }
 
-static void build_zonelists(pg_data_t *pgdat)
+void build_zonelists(pg_data_t *pgdat)
 {
 	int j, node, load;
 	enum zone_type i;
-- 
1.9.3



