Message-ID: <20180619140818.GA16927@e107981-ln.cambridge.arm.com>
Date: Tue, 19 Jun 2018 15:08:26 +0100
From: Lorenzo Pieralisi <lorenzo.pieralisi@....com>
To: Punit Agrawal <punit.agrawal@....com>
Cc: Michal Hocko <mhocko@...nel.org>, Xie XiuQi <xiexiuqi@...wei.com>,
Hanjun Guo <guohanjun@...wei.com>,
Bjorn Helgaas <helgaas@...nel.org>,
tnowicki@...iumnetworks.com, linux-pci@...r.kernel.org,
Catalin Marinas <catalin.marinas@....com>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Will Deacon <will.deacon@....com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Jarkko Sakkinen <jarkko.sakkinen@...ux.intel.com>,
linux-mm@...ck.org, wanghuiqiang@...wei.com,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
zhongjiang <zhongjiang@...wei.com>,
linux-arm <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH 1/2] arm64: avoid alloc memory on offline node
On Tue, Jun 19, 2018 at 01:52:16PM +0100, Punit Agrawal wrote:
> Michal Hocko <mhocko@...nel.org> writes:
>
> > On Tue 19-06-18 20:03:07, Xie XiuQi wrote:
> > [...]
> >> I tested on an arm64 board with 128 cores and 4 NUMA nodes, but I set CONFIG_NR_CPUS=72.
> >> Node 3 is then not created, because node 3 has no memory and no CPUs.
> >> But some PCI devices may be related to node 3, as described in the ACPI tables.
> >
> > Could you double check that zonelists for node 3 are generated
> > correctly?
>
> The cpus in node 3 aren't onlined and there's no memory attached - I
> suspect that no zonelists are built for this node.
>
> We skip creating a node if the number of SRAT entries parsed exceeds
> NR_CPUS[0]. This in turn prevents onlining the NUMA node, and so no
> zonelists will be created for it.
>
> I think the problem will go away if the cpus are restricted via the
> kernel command line by setting nr_cpus instead, since the SRAT parsing
> guard checks the compile-time NR_CPUS rather than the boot-time limit.
>
> Xie, can you try the below patch on top of the one enabling memoryless
> nodes? I'm not sure this is the right solution but at least it'll
> confirm the problem.
This issue looks familiar (or at least related):
git log d3bd058826aa
The reason why the NR_CPUS guard is there is to avoid overflowing
the early_node_cpu_hwid array. IA64 does something different in
that respect compared to x86; we have to look into this.
Regardless, AFAICS the proximity-domain-to-node mappings should not
depend on CONFIG_NR_CPUS; it seems that something is wrong in the
ARM64 ACPI SRAT parsing.
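One possible shape for a fix (completely untested, just to illustrate
the idea) would be to record the pxm-to-node mapping before the NR_CPUS
check, so that only the cpu_to_node bookkeeping is capped:

	pxm = pa->proximity_domain;
	node = acpi_map_pxm_to_node(pxm);
	if (node == NUMA_NO_NODE) {
		/* error handling elided */
		return;
	}
	/* record the node regardless of how many CPUs we can map */
	node_set(node, numa_nodes_parsed);

	if (cpus_in_srat >= NR_CPUS) {
		pr_warn_once("SRAT: cpu_to_node_map[%d] is too small, may not be able to use all cpus\n",
			     NR_CPUS);
		return;
	}

	early_node_cpu_hwid[cpus_in_srat].node = node;
	cpus_in_srat++;

That would keep the proximity domain parsing independent of
CONFIG_NR_CPUS while still bounding the early_node_cpu_hwid[] accesses.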
Lorenzo
>
> Thanks,
> Punit
>
> [0] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/kernel/acpi_numa.c?h=v4.18-rc1#n73
>
> -- >8 --
> diff --git a/arch/arm64/kernel/acpi_numa.c b/arch/arm64/kernel/acpi_numa.c
> index d190a7b231bf..fea0f7164f1a 100644
> --- a/arch/arm64/kernel/acpi_numa.c
> +++ b/arch/arm64/kernel/acpi_numa.c
> @@ -70,11 +70,9 @@ void __init acpi_numa_gicc_affinity_init(struct acpi_srat_gicc_affinity *pa)
> if (!(pa->flags & ACPI_SRAT_GICC_ENABLED))
> return;
>
> - if (cpus_in_srat >= NR_CPUS) {
> + if (cpus_in_srat >= NR_CPUS)
> pr_warn_once("SRAT: cpu_to_node_map[%d] is too small, may not be able to use all cpus\n",
> NR_CPUS);
> - return;
> - }
>
> pxm = pa->proximity_domain;
> node = acpi_map_pxm_to_node(pxm);
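(Note that with the return dropped, the

	early_node_cpu_hwid[cpus_in_srat].node = node;

store later in this function is no longer bounded by the NR_CPUS check,
so the array accesses would need to be guarded as well - see above.)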