Message-ID: <20140123061343.GB15206@redhat.com>
Date:	Thu, 23 Jan 2014 01:13:43 -0500
From:	Dave Jones <davej@...hat.com>
To:	David Rientjes <rientjes@...gle.com>
Cc:	Tang Chen <tangchen@...fujitsu.com>, tglx@...utronix.de,
	mingo@...hat.com, hpa@...or.com, akpm@...ux-foundation.org,
	zhangyanfei@...fujitsu.com, guz.fnst@...fujitsu.com,
	x86@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] numa, mem-hotplug: Fix stack overflow in numa when
 setting kernel nodes to unhotpluggable.

On Wed, Jan 22, 2014 at 10:06:14PM -0800, David Rientjes wrote:
 > On Thu, 23 Jan 2014, Tang Chen wrote:
 > 
 > > Dave found that the kernel hangs during boot. This is because
 > > the on-stack nodemask_t variable numa_kernel_nodes is large
 > > enough to overflow the kernel stack.
 > > 
 > > This doesn't happen on every boot; according to Dave, it showed
 > > up roughly once in five boots. The backtrace looks like this:
 > > 
 > > dump_stack
 > > panic
 > > ? numa_clear_kernel_node_hotplug
 > > __stack_chk_fail
 > > numa_clear_kernel_node_hotplug
 > > ? memblock_search_pfn_nid
 > > ? __early_pfn_to_nid
 > > numa_init
 > > x86_numa_init
 > > initmem_init
 > > setup_arch
 > > start_kernel
 > > 
 > > This patch fixes the problem by making numa_kernel_nodes a
 > > static variable placed in the __initdata section.
 > > 
 > > Reported-by: Dave Jones <davej@...hat.com>
 > > Signed-off-by: Tang Chen <tangchen@...fujitsu.com>
 > > Tested-by: Gu Zheng <guz.fnst@...fujitsu.com>
 > 
 > I guess it depends on what Dave's CONFIG_NODES_SHIFT is?

It's 10, because I had MAXSMP set.

So MAX_NUMNODES = 1 << 10 = 1024.

And the nodemask is a bitmap of 1024 bits, stored as an array of
longs: 16 of them on x86_64, i.e. 128 bytes.

How does a 128-byte local overflow the stack?
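
For the record, here's a quick standalone check of that arithmetic.
A sketch that mirrors the kernel's DECLARE_BITMAP() sizing, assuming
64-bit longs; it is not the kernel's own headers:

#include <stdio.h>

/* Mirror the kernel's nodemask sizing for CONFIG_NODES_SHIFT=10
 * (what MAXSMP selects on x86_64). nodemask_t is
 * DECLARE_BITMAP(bits, MAX_NUMNODES): an array of unsigned long
 * holding one bit per possible node.
 */
#define NODES_SHIFT	10
#define MAX_NUMNODES	(1 << NODES_SHIFT)	/* 1024 possible nodes */
#define BITS_PER_LONG	64			/* x86_64 */
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

int main(void)
{
	unsigned long nlongs = BITS_TO_LONGS(MAX_NUMNODES);

	/* Prints "16 longs, 128 bytes" with the values above. */
	printf("%lu longs, %lu bytes\n",
	       nlongs, nlongs * sizeof(unsigned long));
	return 0;
}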

	Dave
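
For reference, the change under discussion boils down to moving that
nodemask off the boot-time stack and into init memory. A sketch of
its shape against arch/x86/mm/numa.c of that era; the context lines
are illustrative, and the real patch's exact placement of the
declaration may differ:

--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ static void __init numa_clear_kernel_node_hotplug(void)
 {
-	nodemask_t numa_kernel_nodes;
+	/* 128 bytes with CONFIG_NODES_SHIFT=10; keep it in .init.data
+	 * (discarded after boot) rather than on the stack.
+	 */
+	static nodemask_t numa_kernel_nodes __initdata;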

