Message-ID: <4DBCACAA.2080902@kernel.org>
Date:	Sat, 30 Apr 2011 17:43:22 -0700
From:	Yinghai Lu <yinghai@...nel.org>
To:	Tejun Heo <tj@...nel.org>
CC:	mingo@...hat.com, rientjes@...gle.com, tglx@...utronix.de,
	hpa@...or.com, x86@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86, NUMA: Fix empty memblk detection in numa_cleanup_meminfo()

On 04/30/2011 05:33 AM, Tejun Heo wrote:
> From: Yinghai Lu <yinghai@...nel.org>
> 
> numa_cleanup_meminfo() trims each memblk between the low (0) and high
> (max_pfn) limits and discards empty ones.  However, the emptiness
> detection incorrectly used an equality test.  If the start of a memblk
> is higher than max_pfn, the memblk is empty but fails the equality
> test and doesn't get discarded.
> 
> Fix it by using >= instead of ==.
> 
> Signed-off-by: Yinghai Lu <yinghai@...nel.org>
> Signed-off-by: Tejun Heo <tj@...nel.org>
> ---
> So, something like this.  Does this fix the problem you see?
> 
> Thanks.
> 
>  arch/x86/mm/numa_64.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> Index: work/arch/x86/mm/numa.c
> ===================================================================
> --- work.orig/arch/x86/mm/numa.c
> +++ work/arch/x86/mm/numa.c
> @@ -191,7 +191,7 @@ int __init numa_cleanup_meminfo(struct n
>  		bi->end = min(bi->end, high);
>  
>  		/* and there's no empty block */
> -		if (bi->start == bi->end) {
> +		if (bi->start >= bi->end) {
>  			numa_remove_memblk_from(i--, mi);
>  			continue;
>  		}
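[Editor's note: a minimal, stand-alone user-space sketch of the failure
mode described above, not the kernel code itself; the limit and block
addresses below are made-up values for illustration only.  After the
clamping step, a memblk that lies entirely above the high limit ends up
with start > end, so the old 'start == end' test never fires while
'start >= end' does.]

#include <stdio.h>
#include <stdint.h>

struct memblk { uint64_t start, end; };

static uint64_t max64(uint64_t a, uint64_t b) { return a > b ? a : b; }
static uint64_t min64(uint64_t a, uint64_t b) { return a < b ? a : b; }

int main(void)
{
	/* hypothetical limits; in the kernel, high = max_pfn << PAGE_SHIFT */
	const uint64_t low = 0, high = 0x1000000000ULL;

	/* a block lying entirely above 'high' */
	struct memblk bi = { 0x1080000000ULL, 0x2080000000ULL };

	/* clamp to [low, high) the way numa_cleanup_meminfo() does */
	bi.start = max64(bi.start, low);
	bi.end   = min64(bi.end, high);

	/* start (0x1080000000) now exceeds end (0x1000000000): the block
	 * is empty, but '==' misses it while '>=' catches it */
	printf("start=%#llx end=%#llx  ==:%d  >=:%d\n",
	       (unsigned long long)bi.start, (unsigned long long)bi.end,
	       bi.start == bi.end, bi.start >= bi.end);
	return 0;
}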
This one works too, but the printout is a bit strange.
On a 512G system I got:

SRAT: Node 0 PXM 0 0-a0000
SRAT: Node 0 PXM 0 100000-80000000
SRAT: Node 0 PXM 0 100000000-1080000000
SRAT: Node 1 PXM 1 1080000000-2080000000
SRAT: Node 2 PXM 2 2080000000-3080000000
SRAT: Node 3 PXM 3 3080000000-4080000000
SRAT: Node 4 PXM 4 4080000000-5080000000
SRAT: Node 5 PXM 5 5080000000-6080000000
SRAT: Node 6 PXM 6 6080000000-7080000000
SRAT: Node 7 PXM 7 7080000000-8080000000
NUMA: Initialized distance table, cnt=8
NUMA: Node 0 [0,a0000) + [100000,80000000) -> [0,80000000)
NUMA: Node 0 [0,80000000) + [100000000,1080000000) -> [0,1000000000)


With the first patch, the same 512G system got:
NUMA: Node 0 [0,a0000) + [100000,80000000) -> [0,80000000)
NUMA: Node 0 [0,80000000) + [100000000,1000000000) -> [0,1000000000)

I still think the first one is cleaner.

Thanks

Yinghai
