Message-ID: <20170526132305.19ef3590@firefly.ozlabs.ibm.com>
Date: Fri, 26 May 2017 13:23:05 +1000
From: Balbir Singh <bsingharora@...il.com>
To: Michael Bringmann <mwb@...ux.vnet.ibm.com>
Cc: linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
Reza Arbab <arbab@...ux.vnet.ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Bharata B Rao <bharata@...ux.vnet.ib>,
Shailendra Singh <shailendras@...dia.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: Re: [PATCH V2 2/2]: powerpc/hotplug/mm: Fix hot-add memory node assoc
On Thu, 25 May 2017 12:37:40 -0500
Michael Bringmann <mwb@...ux.vnet.ibm.com> wrote:
> Removing or adding memory via the PowerPC hotplug interface shows
> anomalies in the association between memory and nodes. The code
> was updated to ensure that all nodes found at boot are still available
> to subsequent DLPAR hotplug-memory operations, even if they are not
> needed at boot time.
>
> Signed-off-by: Michael Bringmann <mwb@...ux.vnet.ibm.com>
> ---
> Changes in V2:
> -- Simplify patches to ensure more nodes in possible map, removing
> code from PowerPC numa.c that constrained possible map to size
> of online map.
> ---
> arch/powerpc/mm/numa.c | 7 -------
> 1 file changed, 7 deletions(-)
>
> diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
> index 15c2dd5..18f3038 100644
> --- a/arch/powerpc/mm/numa.c
> +++ b/arch/powerpc/mm/numa.c
> @@ -907,13 +907,6 @@ void __init initmem_init(void)
>
> memblock_dump_all();
>
> - /*
> - * Reduce the possible NUMA nodes to the online NUMA nodes,
> - * since we do not support node hotplug. This ensures that we
> - * lower the maximum NUMA node ID to what is actually present.
> - */
> - nodes_and(node_possible_map, node_possible_map, node_online_map);
> -
Removing that restriction does add overhead when many cgroups use the
memory controller, since per-node state is then allocated for every
possible node rather than only the online ones. I believe the
nodes_and() trimming was originally added for a pathological test case.
On my system I see 84 cgroups with 1 online node, so the probable extra
overhead is roughly 84 * 255 * sizeof(struct mem_cgroup_tree_per_node).
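For a rough sense of scale, a tiny userspace sketch of that
back-of-envelope arithmetic (the struct size here is only an assumed
placeholder, the real definition lives in mm/memcontrol.c, and the 84
and 255 are just the numbers from this system):

#include <stdio.h>

#define NR_CGROUPS          84   /* memory cgroups seen on this system */
#define EXTRA_NODES         255  /* possible-but-offline NUMA nodes */
#define PER_NODE_STRUCT_SZ  24   /* assumed sizeof(struct mem_cgroup_tree_per_node) */

int main(void)
{
	unsigned long bytes = (unsigned long)NR_CGROUPS *
			      EXTRA_NODES * PER_NODE_STRUCT_SZ;

	printf("approx. extra allocation: %lu bytes (~%lu KiB)\n",
	       bytes, bytes / 1024);
	return 0;
}

With that assumed struct size it works out to about 500 KiB here, and
it grows linearly with the number of cgroups.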
I have tried some patches to reduce this overhead, but they still need
substantial rework.
Balbir Singh.