Date: Wed, 29 May 2024 18:00:58 +0200
From: Jan Beulich <jbeulich@...e.com>
To: Dave Hansen <dave.hansen@...el.com>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>,
 Andrew Lutomirski <luto@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
 lkml <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] x86/NUMA: don't pass MAX_NUMNODES to memblock_set_node()

On 29.05.2024 17:36, Dave Hansen wrote:
> On 5/29/24 00:42, Jan Beulich wrote:
>> On an (old) x86 system with SRAT just covering space above 4GB:
>>
>>     ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0xfffffffff] hotplug
> 
> OK, so you've got a system with buggy NUMA information.  It _used_ to
> "refuse" the NUMA configuration.  Now it tries to move forward and
> eventually does a NULL deref in memmap_init().
> 
> Right?

Yes.

>> the commit referenced below leads to this NUMA configuration no longer
>> being refused by a CONFIG_NUMA=y kernel (previously
>>
>>     NUMA: nodes only cover 6144MB of your 8185MB e820 RAM. Not used.
>>     No NUMA configuration found
>>     Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
>>
>> was seen in the log directly after the message quoted above), because of
>> memblock_validate_numa_coverage() checking for NUMA_NO_NODE (only). This
>> in turn led to memblock_alloc_range_nid()'s warning about MAX_NUMNODES
>> triggering, followed by a NULL deref in memmap_init() when trying to
>> access node 64's (NODES_SHIFT=6) node data.
> 
> This is a really oblique way of saying:
> 
> 	... followed by a NULL deref in memmap_init() of
> 	NODE_DATA(MAX_NUMNODES).
> 
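
For illustration, a minimal stand-alone sketch of that failure mode (not
the kernel's actual definitions; the struct is a stand-in and only the
NODES_SHIFT=6 figure comes from the quoted text): valid node ids run
0 .. MAX_NUMNODES-1, so a per-node data lookup for nid == MAX_NUMNODES
indexes one slot past the end of the array, and whatever comes back is
not a valid pgdat pointer.

    #include <stdio.h>

    #define NODES_SHIFT     6
    #define MAX_NUMNODES    (1 << NODES_SHIFT)      /* 64 */

    struct pglist_data { int node_id; };             /* stand-in type */

    static struct pglist_data *node_data[MAX_NUMNODES];  /* [0]..[63] */
    #define NODE_DATA(nid)  (node_data[(nid)])

    int main(void)
    {
            int nid = MAX_NUMNODES;   /* what the broken path ends up passing */
            struct pglist_data *pgdat;

            if (nid < 0 || nid >= MAX_NUMNODES) {
                    printf("nid %d is outside 0..%d; NODE_DATA(nid) would read "
                           "past node_data[]\n", nid, MAX_NUMNODES - 1);
                    return 1;
            }

            pgdat = NODE_DATA(nid);   /* NULL here: never allocated */
            printf("node %d pgdat at %p\n", nid, (void *)pgdat);
            return 0;
    }
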
>> To compensate for said change, avoid passing MAX_NUMNODES to
>> memblock_set_node(). In turn, numa_clear_kernel_node_hotplug()'s check
>> also needs adjusting.
>>
>> Fixes: ff6c3d81f2e8 ("NUMA: optimize detection of memory with no node id assigned by firmware")
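
Roughly, the memblock_validate_numa_coverage() behaviour referred to
above can be modelled as below. This is a simplified stand-alone sketch,
not the kernel's code; the range array and threshold handling are
assumptions. The point is that only ranges tagged NUMA_NO_NODE count as
uncovered, so ranges still carrying MAX_NUMNODES sail through the check
and the bogus configuration is no longer rejected.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define NUMA_NO_NODE    (-1)
    #define MAX_NUMNODES    64

    struct mem_range { unsigned long size; int nid; };

    /* counts only NUMA_NO_NODE ranges as "not covered by any node" */
    static bool coverage_ok(const struct mem_range *r, size_t n,
                            unsigned long threshold)
    {
            unsigned long uncovered = 0;
            size_t i;

            for (i = 0; i < n; i++)
                    if (r[i].nid == NUMA_NO_NODE)   /* MAX_NUMNODES slips by */
                            uncovered += r[i].size;

            return uncovered < threshold;
    }

    int main(void)
    {
            /* a range tagged MAX_NUMNODES instead of NUMA_NO_NODE */
            struct mem_range r[] = { { 2048, MAX_NUMNODES } };

            printf("coverage check %s\n",
                   coverage_ok(r, 1, 1024) ? "passes (bogus setup accepted)"
                                           : "fails (setup rejected)");
            return 0;
    }
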
> 
> I was expecting to see MAX_NUMNODES checks in ff6c3d81f2e8 somewhere.
> But I don't see any in numa_meminfo_cover_memory() or
> __absent_pages_in_range().
> 
> In other words, it's not completely clear why ff6c3d81f2e8 introduced
> this problem.

It is my understanding that said change, by preventing the NUMA
configuration from being rejected, resulted in different code paths
being taken. The observed crash occurred somewhat later than the "No
NUMA configuration found" etc. messages. Hence I don't see the absence
of any MAX_NUMNODES check in said change as being at odds with it
having introduced the (only perceived?) regression.

Jan
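
For reference, the shape of the change the quoted commit message
describes is roughly the following. This is an illustrative fragment
only, not the literal patch: the call sites, variable names and
surrounding code are assumptions; the message itself only states that
MAX_NUMNODES should no longer be passed to memblock_set_node() and that
numa_clear_kernel_node_hotplug()'s check has to follow suit.

    /*
     * Illustrative only -- not the literal hunks.  Node-less ranges,
     * previously tagged via
     *
     *      memblock_set_node(start, size, &memblock.memory, MAX_NUMNODES);
     *
     * instead get the sentinel the rest of the memblock/NUMA code
     * actually tests for:
     */
    memblock_set_node(start, size, &memblock.memory, NUMA_NO_NODE);

    /* ... and numa_clear_kernel_node_hotplug() skips on the same sentinel: */
    if (nid == NUMA_NO_NODE)        /* was: nid == MAX_NUMNODES */
            continue;
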
