Open Source and information security mailing list archives
Date:	Mon, 8 Feb 2016 12:09:27 +0100
From:	Ingo Molnar <mingo@...nel.org>
To:	PaX Team <pageexec@...email.hu>
Cc:	linux-tip-commits@...r.kernel.org, torvalds@...ux-foundation.org,
	izumi.taku@...fujitsu.com, linux-kernel@...r.kernel.org,
	spender@...ecurity.net, y14sg1@...cast.net,
	akpm@...ux-foundation.org, hpa@...or.com, tglx@...utronix.de,
	laijs@...fujitsu.com, tangchen@...fujitsu.com,
	isimatu.yasuaki@...fujitsu.com, wency@...fujitsu.com,
	zhangyanfei@...fujitsu.com, imtangchen@...il.com
Subject: Re: [tip:x86/mm] x86/mm/numa: Fix memory corruption on 32-bit NUMA
 kernels


* PaX Team <pageexec@...email.hu> wrote:

> On 8 Feb 2016 at 1:42, tip-bot for Ingo Molnar wrote:
> 
> > y14sg1 <y14sg1@...cast.net> reported that when running 32-bit NUMA kernels,
> > the grsecurity/PAX kernel patch flagged a size overflow in this function:
> > 
> >   PAX: size overflow detected in function x86_numa_init arch/x86/mm/numa.c:691 [...]
> > 
> > ... the reason for the overflow is that memblock_set_node() takes physical
> > addresses as arguments, while the start/end variables used by
> > numa_clear_kernel_node_hotplug() are 'unsigned long', which is only 32 bits
> > on PAE kernels even though physical addresses there are 64 bits wide. So we
> > truncate a 64-bit physical range to 32 bits and pass it to memblock_set_node(),
> > which corrupts memory on systems with physical addresses above 4GB.
> 
> I think the truncated values go into memblock_clear_hotplug(), not memblock_set_node().

Indeed.

> Also the side effects are unclear to me: from a quick look these values seem to 
> be used to look up some range in memblock, and I don't know whether that can lead 
> to memory corruption per se or 'only' to some logical bug later.

The truncated/bogus range is also used to (potentially) split up and extend the 
memblock array, so in theory it could cause other side effects as well. But you 
are right that it's not memory corruption per se, it 'only' feeds semi-random data 
to a historically fragile memory setup facility.

I'll fix the changelog.

Thanks,

	Ingo
