Date:	Wed, 28 Oct 2009 09:31:38 +0100
From:	Heiko Carstens <>
To:	David Rientjes <>
Cc:	Alex Chiang <>,
	Andrew Morton <>,
	Gary Hade <>, Badari Pulavarty <>,
	Martin Schwidefsky <>,
	Ingo Molnar <>
Subject: Re: [PATCH v2 1/5] mm: add numa node symlink for memory section in

On Tue, Oct 27, 2009 at 02:27:56PM -0700, David Rientjes wrote:
> On Tue, 27 Oct 2009, Alex Chiang wrote:
> > Thank you for ACKing, David.
> > 
> > S390 guys, I cc'ed you on this patch because I heard a rumour
> > that your memory sections may belong to more than one NUMA node?
> > Is that true? If so, how would you like me to handle that
> > situation?
> > 
> You're referring to how unregister_mem_sect_under_nodes() should be 
> handled, right?  register_mem_sect_under_node() already looks supported by 
> your patch.
> Since the unregister function includes a plural "nodes," I assume that 
> it's possible for hotplug to register a memory section to more than one 
> node.  That's probably lacking on x86 currently, however, because we lack 
> node hotplug.
> I'd suggest a similar iteration through pfns to the one the register 
> function does, checking for multiple nodes and then removing the link 
> from all applicable node_devices kobjects when unregistering.
> Maybe one of the s390 maintainers will test that?

The short answer is: s390 doesn't support NUMA, because the hardware doesn't
tell us which node (book, in s390 terms) a memory range belongs to.

The memory layout for a logical partition is striped: the first x MB belong
to node 0, the next x MB to node 1, and so on.

Also, since there is always a hypervisor running below Linux, I don't think
knowing which node a piece of memory belongs to would help much: if the
hypervisor decides to schedule a virtual cpu of a logical partition on a
different node, then what?