Message-ID: <CA+5PVA5M=NK2VrCL+FaVruCeT7BDzx-+gXGXERpFeonPLKOaHg@mail.gmail.com>
Date:	Mon, 3 Feb 2014 19:55:38 -0500
From:	Josh Boyer <jwboyer@...oraproject.org>
To:	Dave Jones <davej@...hat.com>, Tang Chen <tangchen@...fujitsu.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	zhangyanfei@...fujitsu.com, guz.fnst@...fujitsu.com,
	x86 <x86@...nel.org>,
	"Linux-Kernel@...r. Kernel. Org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] numa, mem-hotplug: Fix array index overflow when
 synchronizing nid to memblock.reserved.

On Tue, Jan 28, 2014 at 10:24 AM, Dave Jones <davej@...hat.com> wrote:
> On Tue, Jan 28, 2014 at 05:05:16PM +0800, Tang Chen wrote:
>  > The following call path causes an array out-of-bounds access.
>  >
>  > memblock_add_region() always sets nid in memblock.reserved to MAX_NUMNODES.
>  > In numa_register_memblks(), after we set all nids in memblock.reserved to their
>  > correct values, we call setup_node_data(), which uses memblock_alloc_nid() to
>  > allocate memory; the newly reserved regions again have nid set to MAX_NUMNODES.
>  >
>  > The nodemask_t type can be seen as a bit array whose valid indices are 0 ~ MAX_NUMNODES-1.
>  >
>  > After that, when numa_clear_kernel_node_hotplug() calls node_set(), the nodemask_t
>  > is indexed with MAX_NUMNODES, which is outside the valid range [0, MAX_NUMNODES-1].
>  >
>  > See below:
>  >
>  > numa_init()
>  >  |---> numa_register_memblks()
>  >  |      |---> memblock_set_node(memory)    set correct nid in memblock.memory
>  >  |      |---> memblock_set_node(reserved)  set correct nid in memblock.reserved
>  >  |      |......
>  >  |      |---> setup_node_data()
>  >  |             |---> memblock_alloc_nid()  here, nid is set to MAX_NUMNODES (1024)
>  >  |......
>  >  |---> numa_clear_kernel_node_hotplug()
>  >         |---> node_set()                   here, the index is 1024, which overflows
>  >
>  > This patch moves the nid setting into numa_clear_kernel_node_hotplug() to fix this problem.
>  >
>  > Reported-by: Dave Jones <davej@...hat.com>
>  > Signed-off-by: Tang Chen <tangchen@...fujitsu.com>
>  > Tested-by: Gu Zheng <guz.fnst@...fujitsu.com>
>  > ---
>  >  arch/x86/mm/numa.c | 19 +++++++++++--------
>  >  1 file changed, 11 insertions(+), 8 deletions(-)
>
> This does seem to solve the problem (In conjunction with David's variant of the other patch).

Is this (and the first patch in the series) going to land in Linus'
tree soon?  I don't see them in -rc1, and without them people are
still hitting the early oops that Dave reported.

josh
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
