Message-Id: <20170206153529.14614-2-richard.weiyang@gmail.com>
Date: Mon, 6 Feb 2017 23:35:29 +0800
From: Wei Yang <richard.weiyang@...il.com>
To: tglx@...utronix.de, mingo@...hat.com, hpa@...or.com, tj@...nel.org
Cc: linux-kernel@...r.kernel.org, Wei Yang <richard.weiyang@...il.com>
Subject: [PATCH 2/2] x86/mm/numa: remove numa_nodemask_from_meminfo()

numa_nodemask_from_meminfo() sets bits in a nodemask for every node that
has memory recorded in a numa_meminfo. Its only two callers use it to add
those bits to a copy of numa_nodes_parsed. With the current code paths,
every node described in numa_meminfo already has its bit set in
numa_nodes_parsed, so setting the bits again is unnecessary.
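
For reference, both call sites follow the same pattern (simplified from
the hunks below):

	/* numa_alloc_distance() */
	nodes_parsed = numa_nodes_parsed;                          /* copy   */
	numa_nodemask_from_meminfo(&nodes_parsed, &numa_meminfo);  /* re-set */

	/* numa_register_memblks() */
	node_possible_map = numa_nodes_parsed;                     /* copy   */
	numa_nodemask_from_meminfo(&node_possible_map, mi);        /* re-set */

The second step in each pair is a no-op whenever the nodes recorded in
numa_meminfo are already present in numa_nodes_parsed.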

The following code path analysis shows that the nodes with memory in
numa_meminfo are indeed a subset of numa_nodes_parsed:

x86_numa_init()
    numa_init()
        Case 1:
        acpi_numa_init()
            acpi_parse_memory_affinity()
                numa_add_memblk()
                node_set(numa_nodes_parsed)
            acpi_parse_slit()
                numa_nodemask_from_meminfo()

        Case 2:
        amd_numa_init()
            numa_add_memblk()
            node_set(numa_nodes_parsed)

        Case 3:
        dummy_numa_init()
            node_set(numa_nodes_parsed)
            numa_add_memblk()

        numa_register_memblks()
            numa_nodemask_from_meminfo()
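
A simplified illustration (not verbatim kernel code) of how each init
path records the node alongside the memblk it adds:

	/* SRAT and AMD paths, simplified */
	numa_add_memblk(node, start, end);
	node_set(node, numa_nodes_parsed);

	/* dummy_numa_init(), simplified */
	node_set(0, numa_nodes_parsed);
	numa_add_memblk(0, 0, PFN_PHYS(max_pfn));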

From the code path analysis above, each time a memblk is added, the
corresponding bit is already set in numa_nodes_parsed. It is therefore
unnecessary to set it again via numa_nodemask_from_meminfo() on a copy of
numa_nodes_parsed.

This patch removes numa_nodemask_from_meminfo() and the now-redundant
calls at its two call sites.

Signed-off-by: Wei Yang <richard.weiyang@...il.com>
---
arch/x86/mm/numa.c | 21 +--------------------
1 file changed, 1 insertion(+), 20 deletions(-)
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 3e9110b34147..4c9070507a59 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -314,20 +314,6 @@ int __init numa_cleanup_meminfo(struct numa_meminfo *mi)
 	return 0;
 }
 
-/*
- * Set nodes, which have memory in @mi, in *@nodemask.
- */
-static void __init numa_nodemask_from_meminfo(nodemask_t *nodemask,
-					      const struct numa_meminfo *mi)
-{
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(mi->blk); i++)
-		if (mi->blk[i].start != mi->blk[i].end &&
-		    mi->blk[i].nid != NUMA_NO_NODE)
-			node_set(mi->blk[i].nid, *nodemask);
-}
-
 /**
  * numa_reset_distance - Reset NUMA distance table
  *
@@ -347,16 +333,12 @@ void __init numa_reset_distance(void)
 
 static int __init numa_alloc_distance(void)
 {
-	nodemask_t nodes_parsed;
 	size_t size;
 	int i, j, cnt = 0;
 	u64 phys;
 
 	/* size the new table and allocate it */
-	nodes_parsed = numa_nodes_parsed;
-	numa_nodemask_from_meminfo(&nodes_parsed, &numa_meminfo);
-
-	for_each_node_mask(i, nodes_parsed)
+	for_each_node_mask(i, numa_nodes_parsed)
 		cnt = i;
 	cnt++;
 	size = cnt * cnt * sizeof(numa_distance[0]);
@@ -535,7 +517,6 @@ static int __init numa_register_memblks(struct numa_meminfo *mi)
 
 	/* Account for nodes with cpus and no memory */
 	node_possible_map = numa_nodes_parsed;
-	numa_nodemask_from_meminfo(&node_possible_map, mi);
 	if (WARN_ON(nodes_empty(node_possible_map)))
 		return -EINVAL;
 
--
2.11.0