Message-Id: <1163690509.5761.4.camel@localhost>
Date:	Thu, 16 Nov 2006 10:21:49 -0500
From:	Lee Schermerhorn <Lee.Schermerhorn@...com>
To:	Christoph Lameter <clameter@....com>
Cc:	Martin Bligh <mbligh@...igh.org>,
	Christian Krafft <krafft@...ibm.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [patch 2/2] enables booting a NUMA system where some nodes
	have no memory

On Wed, 2006-11-15 at 14:41 -0800, Christoph Lameter wrote:
> On Wed, 15 Nov 2006, Martin Bligh wrote:
> 
> > A node is an arbitrary container object containing one or more of:
> > 
> > CPUs
> > Memory
> > IO bus
> > 
> > It does not have to contain memory.
> 
> I have never seen a node on Linux without memory. I have seen nodes
> without processors and without I/O, but not without memory. This seems
> to be something new?

I sent this out earlier in response to another message from Christoph
regarding nodes w/o memory.  Don't know if it made it...

>On Fri, 2006-11-10 at 10:16 -0800, Christoph Lameter wrote:
>> On Wed, 8 Nov 2006, KAMEZAWA Hiroyuki wrote:
>> 
>> > I wonder why there is no code for creating NODE_DATA() for a
>> > device-only node.
>> 
>> On IA64 we remap nodes with no memory / cpus to the nearest node with
>> memory. I think that is sufficient.

I don't think this happens anymore.  Back in the ~2.6.5 days, when we
would configure our NUMA platforms with 100% of memory interleaved [in
hardware at cache line granularity], the CPUs would move to the
interleaved "pseudo-node" and the memoryless nodes would be removed.
numactl --hardware would show something like this:

# uname -r
2.6.5-7.244-default
# numactl --hardware
available: 1 nodes (0-0)
node 0 size: 65443 MB
node 0 free: 64506 MB
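
For anyone who wants to poke at this programmatically: a minimal libnuma
sketch (illustrative only; assumes libnuma/numa.h is installed, build
with -lnuma) that prints roughly the same per-node size/free numbers and
flags nodes that report no memory:

#include <stdio.h>
#include <numa.h>

int main(void)
{
	int node, max;

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}

	max = numa_max_node();
	for (node = 0; node <= max; node++) {
		long long free_b = 0;
		/* numa_node_size64() reports the node's memory in bytes;
		 * a memoryless node shows up as 0 (or -1 on error). */
		long long size_b = numa_node_size64(node, &free_b);

		printf("node %d size: %lld MB free: %lld MB%s\n",
		       node, size_b > 0 ? size_b >> 20 : 0,
		       free_b > 0 ? free_b >> 20 : 0,
		       size_b <= 0 ? "  <-- memoryless" : "");
	}
	return 0;
}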

I started seeing different behavior about the time SPARSEMEM went in.
Now, with a 2.6.16 base kernel [same platform, hardware interleaved
memory], I see:

# uname -r
2.6.16.21-0.8-default
# numactl --hardware
available: 5 nodes (0-4)
node 0 size: 0 MB
node 0 free: 0 MB
node 1 size: 0 MB
node 1 free: 0 MB
node 2 size: 0 MB
node 2 free: 0 MB
node 3 size: 0 MB
node 3 free: 0 MB
node 4 size: 65439 MB
node 4 free: 64492 MB
node distances:
node   0   1   2   3   4
  0:  10  17  17  17  14
  1:  17  10  17  17  14
  2:  17  17  10  17  14
  3:  17  17  17  10  14
  4:  14  14  14  14  10

[Aside:  The firmware/SLIT says that the interleaved memory is closer to
all nodes than other nodes' memory.  This has interesting implications
for the "overflow" zone lists...]
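
To make that concrete: sorting the other nodes by SLIT distance gives the
order a purely distance-based fallback list would try them in, and with
the table above every memoryless node would overflow to the interleaved
node 4 first (distance 14 vs. 17).  A rough userspace sketch
(illustrative only; uses libnuma's numa_distance(), not the kernel's
actual zonelist-building code; build with -lnuma):

#include <stdio.h>
#include <numa.h>

int main(void)
{
	int from, i, j, max;

	if (numa_available() < 0)
		return 1;

	max = numa_max_node();
	for (from = 0; from <= max; from++) {
		int order[128], n = 0;

		for (i = 0; i <= max && n < 128; i++)
			if (i != from)
				order[n++] = i;

		/* insertion sort of the candidate nodes by their SLIT
		 * distance from 'from' (10 = local, larger = farther) */
		for (i = 1; i < n; i++)
			for (j = i; j > 0 && numa_distance(from, order[j]) <
					numa_distance(from, order[j - 1]); j--) {
				int tmp = order[j];
				order[j] = order[j - 1];
				order[j - 1] = tmp;
			}

		printf("node %d falls back to:", from);
		for (i = 0; i < n; i++)
			printf(" %d(d=%d)", order[i],
			       numa_distance(from, order[i]));
		printf("\n");
	}
	return 0;
}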

Lee


