Message-ID: <20200701084200.GN2369@dhcp22.suse.cz>
Date: Wed, 1 Jul 2020 10:42:00 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linuxppc-dev@...ts.ozlabs.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Mel Gorman <mgorman@...e.de>,
Vlastimil Babka <vbabka@...e.cz>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Christopher Lameter <cl@...ux.com>,
Michael Ellerman <mpe@...erman.id.au>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Gautham R Shenoy <ego@...ux.vnet.ibm.com>,
Satheesh Rajendran <sathnaga@...ux.vnet.ibm.com>,
David Hildenbrand <david@...hat.com>
Subject: Re: [PATCH v5 3/3] mm/page_alloc: Keep memoryless cpuless node 0
offline
On Wed 24-06-20 14:58:46, Srikar Dronamraju wrote:
> Currently a Linux kernel with CONFIG_NUMA marks node 0 as online at
> boot on any system with multiple possible nodes. However, in practice
> there are systems where node 0 is both memoryless and cpuless.
>
> This can cause numa_balancing to be enabled on systems with only one
> node that has memory and CPUs. The existence of this dummy node, which
> is both cpuless and memoryless, can confuse users/scripts looking at
> the output of lscpu / numactl.
>
> By initializing N_ONLINE to NODE_MASK_NONE, let's stop assuming that
> node 0 is always online.
>
> v5.8-rc2
> ------------------
> available: 2 nodes (0,2)
> node 0 cpus:
> node 0 size: 0 MB
> node 0 free: 0 MB
> node 2 cpus: 0 1 2 3 4 5 6 7
> node 2 size: 32625 MB
> node 2 free: 31490 MB
> node distances:
> node 0 2
> 0: 10 20
> 2: 20 10
>
> proc and sys files
> ------------------
> /sys/devices/system/node/online: 0,2
> /proc/sys/kernel/numa_balancing: 1
> /sys/devices/system/node/has_cpu: 2
> /sys/devices/system/node/has_memory: 2
> /sys/devices/system/node/has_normal_memory: 2
> /sys/devices/system/node/possible: 0-31
>
> v5.8-rc2 + patch
> ------------------
> available: 1 nodes (2)
> node 2 cpus: 0 1 2 3 4 5 6 7
> node 2 size: 32625 MB
> node 2 free: 31487 MB
> node distances:
> node 2
> 2: 10
>
> proc and sys files
> ------------------
> /sys/devices/system/node/online: 2
> /proc/sys/kernel/numa_balancing: 0
> /sys/devices/system/node/has_cpu: 2
> /sys/devices/system/node/has_memory: 2
> /sys/devices/system/node/has_normal_memory: 2
> /sys/devices/system/node/possible: 0-31
>
> Note: On Powerpc, cpu_to_node() of possible but not present cpus would
> previously return 0. Hence this commit depends on commit ("powerpc/numa:
> Set numa_node for all possible cpus") and commit ("powerpc/numa: Prefer
> node id queried from vphn"). Without those two commits, a Powerpc system
> might crash.
>
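Presumably the failure mode without those two prerequisite commits is a
stale cpu_to_node() == 0 feeding an allocation for a node that was never
onlined. An illustrative, hypothetical call site (not a quote of any
actual oops):

	/*
	 * Illustrative only: if cpu_to_node() still returns 0 for a
	 * possible-but-not-present cpu while node 0 is never onlined,
	 * node 0's pg_data_t is never set up and a path like this ends
	 * up dereferencing a NULL NODE_DATA(0).
	 */
	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL, 0);
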
> 1. User space applications like numactl and lscpu that parse sysfs tend
> to believe there is an extra online node. This tends to confuse users
> and applications. Other user space applications start believing that
> the system was not able to use all of its resources (i.e. resources are
> missing) or that the system was not set up correctly.
>
> 2. The existence of the dummy node also leads to inconsistent
> information: the number of online nodes does not match the information
> in the device-tree and resource dumps.
>
> 3. When the dummy node is present, single-node non-NUMA systems end up
> showing up as NUMA systems and numa_balancing gets enabled. This means
> we take the hit from unnecessary numa hinting faults.

I have to say that I dislike the node online/offline state and directly
exporting that to userspace. Users should only care whether the node
has memory/cpus. Numa nodes can be online without any memory: just
offline all the present memory blocks without physically hot removing
them and you are in the same situation. If users are confused by the
output of tools like numactl -H, then those tools could be updated to
hide nodes without any memory & cpus.
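
Something along these lines would be enough for such a tool - a minimal
sketch, not numactl's actual code, using only the sysfs files quoted
above:

	/*
	 * Minimal sketch, not numactl's actual code: print only online
	 * nodes that have CPUs or memory, hiding dummy nodes.
	 */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#define MAX_NODES 1024

	/* Parse a sysfs node list like "0,2" or "0-31" into a bitmap. */
	static void parse_list(const char *path, unsigned char *map)
	{
		char buf[4096], *tok;
		FILE *f = fopen(path, "r");

		if (!f)
			return;
		if (fgets(buf, sizeof(buf), f))
			for (tok = strtok(buf, ",\n"); tok;
			     tok = strtok(NULL, ",\n")) {
				int lo, hi;

				if (sscanf(tok, "%d-%d", &lo, &hi) < 2)
					lo = hi = atoi(tok);
				for (; lo <= hi && lo < MAX_NODES; lo++)
					map[lo] = 1;
			}
		fclose(f);
	}

	int main(void)
	{
		static unsigned char online[MAX_NODES], cpu[MAX_NODES],
				     mem[MAX_NODES];
		int i;

		parse_list("/sys/devices/system/node/online", online);
		parse_list("/sys/devices/system/node/has_cpu", cpu);
		parse_list("/sys/devices/system/node/has_memory", mem);

		/* Hide nodes that are online but cpuless and memoryless. */
		for (i = 0; i < MAX_NODES; i++)
			if (online[i] && (cpu[i] || mem[i]))
				printf("node %d\n", i);
		return 0;
	}

On your v5.8-rc2 output above this would show only node 2.
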
The autonuma problem sounds interesting, but again this patch doesn't
really solve the underlying issue, because I strongly suspect the
problem is still there when a numa node gets all of its memory offlined
as mentioned above.
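
To be explicit about why: the boot-time auto-enablement simply keys off
the number of online nodes, roughly like this (a paraphrase from memory
of check_numabalancing_enable() in mm/mempolicy.c, not a verbatim copy):

	/*
	 * Paraphrased sketch: only the count of *online* nodes matters
	 * here, so a cpuless, memoryless but online node is enough to
	 * flip numa_balancing on.
	 */
	if (num_online_nodes() > 1)
		set_numabalancing_state(
			IS_ENABLED(CONFIG_NUMA_BALANCING_DEFAULT_ENABLED));

That matches the /proc/sys/kernel/numa_balancing values in your before
and after outputs, and nothing re-evaluates it when a node later has
all of its memory offlined.
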
While I completely agree that making node 0 special is wrong, I still
have a hard time reviewing this very simple-looking patch, because all
the numa initialization is so spread around that this might just blow
up in unexpected places. IIRC we discussed testing in the previous
version and David provided a way to emulate these configurations on
x86. Did you manage to use those instructions for additional testing on
architectures other than ppc?

> Cc: linuxppc-dev@...ts.ozlabs.org
> Cc: linux-mm@...ck.org
> Cc: linux-kernel@...r.kernel.org
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: Mel Gorman <mgorman@...e.de>
> Cc: Vlastimil Babka <vbabka@...e.cz>
> Cc: "Kirill A. Shutemov" <kirill@...temov.name>
> Cc: Christopher Lameter <cl@...ux.com>
> Cc: Michael Ellerman <mpe@...erman.id.au>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
> Cc: Gautham R Shenoy <ego@...ux.vnet.ibm.com>
> Cc: Satheesh Rajendran <sathnaga@...ux.vnet.ibm.com>
> Cc: David Hildenbrand <david@...hat.com>
> Signed-off-by: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
> ---
> Changelog v4->v5:
> - rebased to v5.8-rc2
> link v4: http://lore.kernel.org/lkml/20200512132937.19295-1-srikar@linux.vnet.ibm.com/t/#u
>
> Changelog v1->v2:
> - Rebased to v5.7-rc3
> Link v2: https://lore.kernel.org/linuxppc-dev/20200428093836.27190-1-srikar@linux.vnet.ibm.com/t/#u
>
> mm/page_alloc.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 48eb0f1410d4..5187664558e1 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -117,8 +117,10 @@ EXPORT_SYMBOL(latent_entropy);
> */
> nodemask_t node_states[NR_NODE_STATES] __read_mostly = {
> [N_POSSIBLE] = NODE_MASK_ALL,
> +#ifdef CONFIG_NUMA
> + [N_ONLINE] = NODE_MASK_NONE,
> +#else
> [N_ONLINE] = { { [0] = 1UL } },
> -#ifndef CONFIG_NUMA
> [N_NORMAL_MEMORY] = { { [0] = 1UL } },
> #ifdef CONFIG_HIGHMEM
> [N_HIGH_MEMORY] = { { [0] = 1UL } },
> --
> 2.18.1
>
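For anyone reading along, the mechanics of the one-liner: NODE_MASK_NONE
is defined in include/linux/nodemask.h (quoted from memory) as

	#define NODE_MASK_NONE						\
	((nodemask_t) { {						\
		[0 ... BITS_TO_LONGS(MAX_NUMNODES)-1] = 0UL		\
	} })

so with CONFIG_NUMA no node starts out online and the architecture code
has to online each node explicitly via node_set_online() during boot,
while the !CONFIG_NUMA build keeps the old { { [0] = 1UL } } initializer,
i.e. bit 0 set and node 0 online, as before.
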
--
Michal Hocko
SUSE Labs