Date:	Mon, 20 Aug 2012 15:06:48 +0800
From:	Wen Congyang <wency@...fujitsu.com>
To:	wujianguo <wujianguo106@...il.com>
CC:	tony.luck@...el.com, fenghua.yu@...el.com,
	linux-ia64@...r.kernel.org, linux-kernel@...r.kernel.org,
	jiang.liu@...wei.com, guohanjun@...wei.com, qiuxishi@...wei.com,
	liuj97@...il.com
Subject: Re: [PATCH]mm/ia64: fix a node distance bug

At 08/20/2012 02:21 PM, wujianguo Wrote:
> From: Jianguo Wu <wujianguo@...wei.com>
> 
> Hi all,
> 	When doing memory hot-plug, we found that the node distance is wrong after
> offlining a node on the IA64 platform. For example, on a system with 4 nodes:
> node distances:
> node   0   1   2   3
>   0:  10  21  21  32
>   1:  21  10  32  21
>   2:  21  32  10  21
>   3:  32  21  21  10
> 
> linux-drf:/sys/devices/system/node/node0 # cat distance
> 10  21  21  32
> linux-drf:/sys/devices/system/node/node1 # cat distance
> 21  10  32  21
> 
> After offlining node2:
> linux-drf:/sys/devices/system/node/node0 # cat distance
> 10 21 32
> linux-drf:/sys/devices/system/node/node1 # cat distance
> 32 21 32	--------->expected value is: 21  10  21
> 
> In arch IA64, we have the following definition:
> extern u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES];
> #define node_distance(from,to) (numa_slit[(from) * num_online_nodes() + (to)])
> 
> The node distance is set up as follows:
> acpi_numa_arch_fixup()
> {
> 	...
> 	memset(numa_slit, -1, sizeof(numa_slit));
> 	for (i = 0; i < slit_table->locality_count; i++) {
> 		if (!pxm_bit_test(i))
> 			continue;
> 		node_from = pxm_to_node(i);
> 		for (j = 0; j < slit_table->locality_count; j++) {
> 			if (!pxm_bit_test(j))
> 				continue;
> 			node_to = pxm_to_node(j);
> 			node_distance(node_from, node_to) =
> 			    slit_table->entry[i * slit_table->locality_count + j];
> 		}
> 	}
> 	...
> }
> 	num_online_nodes() is not a constant: during system boot it returns 4, but
> after node2 is offlined it returns 3, so we read the wrong node distance values.
> This patch fixes this bug.
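
(For illustration only; this is a standalone user-space sketch, not kernel code,
and the table contents and node counts are taken from the 4-node example above.)
The stale index can be reproduced by mimicking the macro with a stride that
follows the online-node count:

#include <stdio.h>

/* Distance table from the 4-node example above, filled with a stride of 4
 * because num_online_nodes() returned 4 when acpi_numa_arch_fixup() ran. */
static unsigned char numa_slit[4 * 4] = {
	10, 21, 21, 32,
	21, 10, 32, 21,
	21, 32, 10, 21,
	32, 21, 21, 10,
};

/* Mimics the old macro: the stride follows the current online-node count. */
static int node_distance(int from, int to, int online_nodes)
{
	return numa_slit[from * online_nodes + to];
}

int main(void)
{
	/* At boot, 4 nodes are online: node_distance(1, 1) = numa_slit[5] = 10. */
	printf("before offline: %d\n", node_distance(1, 1, 4));

	/* After offlining node2 the count drops to 3, but the table was filled
	 * with a stride of 4: node_distance(1, 1) = numa_slit[4] = 21. */
	printf("after offline:  %d\n", node_distance(1, 1, 3));

	return 0;
}

Reading node1's whole row with the stale stride of 3 yields 32 21 32, matching
the wrong sysfs output shown above.
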
> 
> Signed-off-by: Jianguo Wu <wujianguo@...wei.com>
> ---
>  arch/ia64/include/asm/numa.h |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/ia64/include/asm/numa.h b/arch/ia64/include/asm/numa.h
> index 6a8a27c..2e27ef1 100644
> --- a/arch/ia64/include/asm/numa.h
> +++ b/arch/ia64/include/asm/numa.h
> @@ -59,7 +59,7 @@ extern struct node_cpuid_s node_cpuid[NR_CPUS];
>   */
> 
>  extern u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES];
> -#define node_distance(from,to) (numa_slit[(from) * num_online_nodes() + (to)])
> +#define node_distance(from,to) (numa_slit[(from) * MAX_NUMNODES + (to)])

Hmm, MAX_NUMNODES is too large. I think num_possible_nodes() is better.
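
A minimal sketch of that suggestion (hypothetical, not a posted patch): since
acpi_numa_arch_fixup() stores entries through the same node_distance() macro,
setup and later reads stay consistent with any stride that does not change at
runtime, and the possible-node count is fixed once boot-time NUMA parsing is
done:

/* Hypothetical alternative: index by the possible-node count, which does not
 * shrink when a node is offlined, instead of by MAX_NUMNODES. */
#define node_distance(from,to) (numa_slit[(from) * num_possible_nodes() + (to)])

With this stride the used entries pack into the first
num_possible_nodes() * num_possible_nodes() slots of numa_slit[] instead of
being spread out with a stride of MAX_NUMNODES.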

Thanks
Wen Congyang

> 
>  extern int paddr_to_nid(unsigned long paddr);
> 
