Message-ID: <20210510102107.GR2633526@linux.vnet.ibm.com>
Date: Mon, 10 May 2021 15:51:07 +0530
From: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
To: Laurent Dufour <ldufour@...ux.ibm.com>
Cc: mpe@...erman.id.au, benh@...nel.crashing.org, paulus@...ba.org,
nathanl@...ux.ibm.com, linuxppc-dev@...ts.ozlabs.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] ppc64/numa: consider the max numa node for migratable
LPAR
* Laurent Dufour <ldufour@...ux.ibm.com> [2021-04-29 20:19:01]:
> When an LPAR is migratable, we should consider the maximum possible NUMA
> node instead of the number of NUMA nodes on the actual system.
>
> The DT property 'ibm,current-associativity-domains' defines the maximum
> number of nodes the LPAR can see when running on that box. But if the LPAR
> is migrated to another box, it may see up to the number of nodes defined by
> 'ibm,max-associativity-domains'. So if an LPAR is migratable, that value
> should be used.
>
> Unfortunately, there is no easy way to know whether an LPAR is migratable
> or not. The hypervisor exports the property 'ibm,migratable-partition'
> when it is set up to migrate the partition, but that would not necessarily
> mean that the current partition is migratable.
>
> Without that patch, when a LPAR is started on a 2 nodes box and then
> migrated to a 3 nodes box, the hypervisor may spread the LPAR's CPUs on the
> 3rd node. In that case if a CPU from that 3rd node is added to the LPAR, it
> will be wrongly assigned to the node because the kernel has been set to use
> up to 2 nodes (the configuration of the departure node). With that patch
> applies, the CPU is correctly added to the 3rd node.
You probably meant, "With this patch applied".
Also, you may want to add a Fixes: tag.
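For reference, the conventional format is shown below (the SHA and subject
here are placeholders, not the actual offending commit, which you would need
to identify):

```
Fixes: 123456789abc ("subject line of the commit that introduced the issue")
```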
> Cc: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
> Signed-off-by: Laurent Dufour <ldufour@...ux.ibm.com>
> ---
> arch/powerpc/mm/numa.c | 14 +++++++++++---
> 1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
> index f2bf98bdcea2..673fa6e47850 100644
> --- a/arch/powerpc/mm/numa.c
> +++ b/arch/powerpc/mm/numa.c
> @@ -893,7 +893,7 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
> static void __init find_possible_nodes(void)
> {
> struct device_node *rtas;
> - const __be32 *domains;
> + const __be32 *domains = NULL;
> int prop_length, max_nodes;
> u32 i;
>
> @@ -909,9 +909,14 @@ static void __init find_possible_nodes(void)
> * it doesn't exist, then fallback on ibm,max-associativity-domains.
> * Current denotes what the platform can support compared to max
> * which denotes what the Hypervisor can support.
> + *
> + * If the LPAR is migratable, new nodes might be activated after a LPM,
> + * so we should consider the max number in that case.
> */
> - domains = of_get_property(rtas, "ibm,current-associativity-domains",
> - &prop_length);
> + if (!of_get_property(of_root, "ibm,migratable-partition", NULL))
> + domains = of_get_property(rtas,
> + "ibm,current-associativity-domains",
> + &prop_length);
> if (!domains) {
> domains = of_get_property(rtas, "ibm,max-associativity-domains",
> &prop_length);
> @@ -920,6 +925,9 @@ static void __init find_possible_nodes(void)
> }
>
> max_nodes = of_read_number(&domains[min_common_depth], 1);
> + printk(KERN_INFO "Partition configured for %d NUMA nodes.\n",
> + max_nodes);
> +
Another nit:
you may want to use pr_info() instead of printk(KERN_INFO ...)
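For instance, the hunk could read something like this (just a sketch of the
suggestion, not a tested diff):

```
-	printk(KERN_INFO "Partition configured for %d NUMA nodes.\n",
-	       max_nodes);
+	pr_info("Partition configured for %d NUMA nodes.\n", max_nodes);
```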
> for (i = 0; i < max_nodes; i++) {
> if (!node_possible(i))
> node_set(i, node_possible_map);
> --
> 2.31.1
>
Otherwise looks good to me.
Reviewed-by: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
--
Thanks and Regards
Srikar Dronamraju