Message-Id: <1551011649-30103-2-git-send-email-kernelfans@gmail.com>
Date:   Sun, 24 Feb 2019 20:34:04 +0800
From:   Pingfan Liu <kernelfans@...il.com>
To:     x86@...nel.org, linux-mm@...ck.org
Cc:     Pingfan Liu <kernelfans@...il.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        "H. Peter Anvin" <hpa@...or.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Mike Rapoport <rppt@...ux.vnet.ibm.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...e.de>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Andy Lutomirski <luto@...nel.org>,
        Andi Kleen <ak@...ux.intel.com>,
        Petr Tesarik <ptesarik@...e.cz>,
        Michal Hocko <mhocko@...e.com>,
        Stephen Rothwell <sfr@...b.auug.org.au>,
        Jonathan Corbet <corbet@....net>,
        Nicholas Piggin <npiggin@...il.com>,
        Daniel Vacek <neelx@...hat.com>, linux-kernel@...r.kernel.org
Subject: [PATCH 1/6] mm/numa: extract the code that builds the node fallback list

An upcoming patch in this series makes the memblock allocator use the
node fallback list info as well. Extract the code that builds the list
into a helper so it can be reused.
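
For illustration, a memblock-side user could then walk the fallback
order along these lines (a sketch only, not the actual code of the
follow-up patch; the loop body is a placeholder):

	static int node_order[MAX_NUMNODES];
	nodemask_t used_mask;
	int i, nr_nodes, local_node = numa_node_id();

	nodes_clear(used_mask);
	nr_nodes = build_node_order(node_order, MAX_NUMNODES,
			local_node, &used_mask);
	/* Nodes come back nearest-first; try each in turn. */
	for (i = 0; i < nr_nodes; i++) {
		/* e.g. attempt the memblock allocation on node_order[i] */
	}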

Signed-off-by: Pingfan Liu <kernelfans@...il.com>
CC: Thomas Gleixner <tglx@...utronix.de>
CC: Ingo Molnar <mingo@...hat.com>
CC: Borislav Petkov <bp@...en8.de>
CC: "H. Peter Anvin" <hpa@...or.com>
CC: Dave Hansen <dave.hansen@...ux.intel.com>
CC: Vlastimil Babka <vbabka@...e.cz>
CC: Mike Rapoport <rppt@...ux.vnet.ibm.com>
CC: Andrew Morton <akpm@...ux-foundation.org>
CC: Mel Gorman <mgorman@...e.de>
CC: Joonsoo Kim <iamjoonsoo.kim@....com>
CC: Andy Lutomirski <luto@...nel.org>
CC: Andi Kleen <ak@...ux.intel.com>
CC: Petr Tesarik <ptesarik@...e.cz>
CC: Michal Hocko <mhocko@...e.com>
CC: Stephen Rothwell <sfr@...b.auug.org.au>
CC: Jonathan Corbet <corbet@....net>
CC: Nicholas Piggin <npiggin@...il.com>
CC: Daniel Vacek <neelx@...hat.com>
CC: linux-kernel@...r.kernel.org
---
 mm/page_alloc.c | 47 ++++++++++++++++++++++++++++-------------------
 1 file changed, 28 insertions(+), 19 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 35fdde0..a6967a1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5380,6 +5380,31 @@ static void build_thisnode_zonelists(pg_data_t *pgdat)
 	zonerefs->zone_idx = 0;
 }
 
+int build_node_order(int *node_order_array, int sz,
+	int local_node, nodemask_t *used_mask)
+{
+	int node, nr_nodes = 0;
+	int prev_node = local_node;
+	int load = nr_online_nodes;
+
+	while (nr_nodes < sz &&
+		(node = find_next_best_node(local_node, used_mask)) >= 0) {
+		/*
+		 * We don't want to pressure a particular node.
+		 * So adding penalty to the first node in same
+		 * distance group to make it round-robin.
+		 */
+		if (node_distance(local_node, node) !=
+		    node_distance(local_node, prev_node))
+			node_load[node] = load;
+
+		node_order_array[nr_nodes++] = node;
+		prev_node = node;
+		load--;
+	}
+	return nr_nodes;
+}
+
 /*
  * Build zonelists ordered by zone and nodes within zones.
  * This results in conserving DMA zone[s] until all Normal memory is
@@ -5390,32 +5415,16 @@ static void build_thisnode_zonelists(pg_data_t *pgdat)
 static void build_zonelists(pg_data_t *pgdat)
 {
 	static int node_order[MAX_NUMNODES];
-	int node, load, nr_nodes = 0;
+	int local_node, nr_nodes = 0;
 	nodemask_t used_mask;
-	int local_node, prev_node;
 
 	/* NUMA-aware ordering of nodes */
 	local_node = pgdat->node_id;
-	load = nr_online_nodes;
-	prev_node = local_node;
 	nodes_clear(used_mask);
 
 	memset(node_order, 0, sizeof(node_order));
-	while ((node = find_next_best_node(local_node, &used_mask)) >= 0) {
-		/*
-		 * We don't want to pressure a particular node.
-		 * So adding penalty to the first node in same
-		 * distance group to make it round-robin.
-		 */
-		if (node_distance(local_node, node) !=
-		    node_distance(local_node, prev_node))
-			node_load[node] = load;
-
-		node_order[nr_nodes++] = node;
-		prev_node = node;
-		load--;
-	}
-
+	nr_nodes = build_node_order(node_order, MAX_NUMNODES,
+		local_node, &used_mask);
 	build_zonelists_in_node_order(pgdat, node_order, nr_nodes);
 	build_thisnode_zonelists(pgdat);
 }
-- 
2.7.4
