Message-ID: <9d02405a-793e-bcf5-a424-470d9c82ec7d@csgroup.eu>
Date:   Sat, 2 Apr 2022 18:35:15 +0200
From:   Christophe Leroy <christophe.leroy@...roup.eu>
To:     Laurent Dufour <ldufour@...ux.ibm.com>,
        linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Cc:     nathanl@...ux.ibm.com, cheloha@...ux.ibm.com
Subject: Re: [PATCH v2] powerpc/drmem: Don't compute the NUMA node for each
 LMB



On 05/08/2020 at 15:35, Laurent Dufour wrote:
> All the LMBs from the same set of the ibm,dynamic-memory-v2 property
> share the same NUMA node. Don't compute that node for each one.
> 
> Tested on a system with 1022 LMBs spread over 4 NUMA nodes: only 4 calls
> to lmb_set_nid() were made instead of 1022.
> 
> This should prevent some soft lockups when starting large guests.
> 
> This code is only meaningful when CONFIG_MEMORY_HOTPLUG is set;
> otherwise, the nid field is not present in the drmem_lmb structure.
> 
> Signed-off-by: Laurent Dufour <ldufour@...ux.ibm.com>

It looks like this patch was superseded by e5e179aa3a39 ("pseries/drmem: 
don't cache node id in drmem_lmb struct").

If not, it conflicts with that patch anyway, so it needs to be rebased.

Thanks
Christophe


> ---
>   arch/powerpc/mm/drmem.c | 25 ++++++++++++++++++++++++-
>   1 file changed, 24 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/mm/drmem.c b/arch/powerpc/mm/drmem.c
> index b2eeea39684c..c11b6ec99ea3 100644
> --- a/arch/powerpc/mm/drmem.c
> +++ b/arch/powerpc/mm/drmem.c
> @@ -402,6 +402,9 @@ static void __init init_drmem_v2_lmbs(const __be32 *prop)
>   	const __be32 *p;
>   	u32 i, j, lmb_sets;
>   	int lmb_index;
> +#ifdef CONFIG_MEMORY_HOTPLUG
> +	struct drmem_lmb *first = NULL;
> +#endif
>   
>   	lmb_sets = of_read_number(prop++, 1);
>   	if (lmb_sets == 0)
> @@ -426,6 +429,15 @@ static void __init init_drmem_v2_lmbs(const __be32 *prop)
>   	for (i = 0; i < lmb_sets; i++) {
>   		read_drconf_v2_cell(&dr_cell, &p);
>   
> +#ifdef CONFIG_MEMORY_HOTPLUG
> +		/*
> +		 * Fetch the NUMA node id for the first set, or whenever the
> +		 * associativity index differs from the previous set.
> +		 */
> +		if (first && dr_cell.aa_index != first->aa_index)
> +			first = NULL;
> +#endif
> +
>   		for (j = 0; j < dr_cell.seq_lmbs; j++) {
>   			lmb = &drmem_info->lmbs[lmb_index++];
>   
> @@ -438,7 +450,18 @@ static void __init init_drmem_v2_lmbs(const __be32 *prop)
>   			lmb->aa_index = dr_cell.aa_index;
>   			lmb->flags = dr_cell.flags;
>   
> -			lmb_set_nid(lmb);
> +#ifdef CONFIG_MEMORY_HOTPLUG
> +			/*
> +			 * All the LMBs in the set share the same NUMA
> +			 * associativity property, so read that node only once.
> +			 */
> +			if (!first) {
> +				lmb_set_nid(lmb);
> +				first = lmb;
> +			} else {
> +				lmb->nid = first->nid;
> +			}
> +#endif
>   		}
>   	}
>   }
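
For readers following the hunks above, here is a minimal user-space
sketch of the same per-set caching pattern. It is an illustration only:
the types are simplified, expensive_node_lookup() is a hypothetical
stand-in for the firmware-backed lookup done by lmb_set_nid(), and the
nested set/LMB loops are flattened into a single loop.

	#include <stddef.h>
	#include <stdio.h>

	struct lmb {
		unsigned int aa_index;	/* associativity array index of the set */
		int nid;		/* NUMA node id, filled in below */
	};

	/* Hypothetical stand-in for the expensive node lookup. */
	static int expensive_node_lookup(unsigned int aa_index)
	{
		printf("node lookup for aa_index %u\n", aa_index);
		return (int)(aa_index % 4);
	}

	int main(void)
	{
		/* Two sets: three LMBs with aa_index 0, then two with 1. */
		struct lmb lmbs[] = {
			{ 0, -1 }, { 0, -1 }, { 0, -1 }, { 1, -1 }, { 1, -1 },
		};
		struct lmb *first = NULL;

		for (size_t i = 0; i < sizeof(lmbs) / sizeof(lmbs[0]); i++) {
			struct lmb *lmb = &lmbs[i];

			/* Associativity index changed: invalidate the cache. */
			if (first && lmb->aa_index != first->aa_index)
				first = NULL;

			if (!first) {
				lmb->nid = expensive_node_lookup(lmb->aa_index);
				first = lmb;
			} else {
				lmb->nid = first->nid;	/* reuse cached node */
			}
		}
		return 0;
	}

Running this performs only two lookups for five LMBs, mirroring the
4-calls-for-1022-LMBs result reported in the commit message.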
