Date: Fri, 31 May 2024 12:42:34 +0300
From: Mike Rapoport <rppt@...nel.org>
To: Dave Hansen <dave.hansen@...el.com>
Cc: Jan Beulich <jbeulich@...e.com>,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	Andrew Lutomirski <luto@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	lkml <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] x86/NUMA: don't pass MAX_NUMNODES to memblock_set_node()

Hi Dave,

On Wed, May 29, 2024 at 09:08:12AM -0700, Dave Hansen wrote:
> On 5/29/24 09:00, Jan Beulich wrote:
> >> In other words, it's not completely clear why ff6c3d81f2e8 introduced
> >> this problem.
> > It is my understanding that said change, by preventing the NUMA
> > configuration from being rejected, resulted in different code paths
> > being taken. The observed crash was somewhat later than the "No NUMA
> > configuration found" etc messages. Thus I don't really see a connection
> > between said change not having had any MAX_NUMNODES check and it having
> > introduced the (only perceived?) regression.
> 
> So your system has a bad NUMA config.  If it's rejected, then all is
> merry.  Something goes and writes over the nids in all of the memblocks
> to point to 0 (probably).
> 
> If it _isn't_ rejected, then it leaves a memblock in place that points
> to MAX_NUMNODES.  That MAX_NUMNODES is a ticking time bomb for later.
> 
> So this patch doesn't actually revert the rejection behavior change in
> the Fixes: commit.  It just makes the rest of the code more tolerant to
> _not_ rejecting the NUMA config?
 
It actually does. Before ff6c3d81f2e8 the NUMA coverage was verified
against numa_meminfo rather than memblock, so it could detect that only a
small portion of the memory had a node ID assigned.

With the transition to memblock, the verification relies on the node IDs
set by the arch code, but since memblock_validate_numa_coverage() only
checked for NUMA_NO_NODE, it missed the ranges with nid == MAX_NUMNODES.
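The check there is essentially this (just a sketch from memory, the actual
loop in mm/memblock.c differs in the details):

	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
		/* only NUMA_NO_NODE was treated as "no node ID assigned" ... */
		if (nid == NUMA_NO_NODE)
			nr_pages += end_pfn - start_pfn;
		/* ... so ranges that x86 left at nid == MAX_NUMNODES were
		 * counted as covered and the bogus NUMA config was accepted
		 */
	}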

I took Jan's fix for memblock:

https://lore.kernel.org/all/1c8a058c-5365-4f27-a9f1-3aeb7fb3e7b2@suse.com

but I think we should also replace MAX_NUMNODES with NUMA_NO_NODE in the
calls to memblock_set_node() in arch/x86.
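I.e. something like the below (untested sketch; IIRC the call sites in
numa_init() are wrapped in WARN_ON()):

	--- a/arch/x86/mm/numa.c
	+++ b/arch/x86/mm/numa.c
	-	memblock_set_node(0, ULLONG_MAX, &memblock.memory, MAX_NUMNODES);
	-	memblock_set_node(0, ULLONG_MAX, &memblock.reserved, MAX_NUMNODES);
	+	memblock_set_node(0, ULLONG_MAX, &memblock.memory, NUMA_NO_NODE);
	+	memblock_set_node(0, ULLONG_MAX, &memblock.reserved, NUMA_NO_NODE);

That way memblock would never see MAX_NUMNODES as a node ID in the first
place.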

-- 
Sincerely yours,
Mike.
