lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <1355440542.1823.21.camel@buesod1.americas.hpqcorp.net>
Date:	Thu, 13 Dec 2012 15:15:42 -0800
From:	Davidlohr Bueso <davidlohr.bueso@...com>
To:	Dave Hansen <dave@...ux.vnet.ibm.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] mm: add node physical memory range to sysfs

On Wed, 2012-12-12 at 20:49 -0800, Dave Hansen wrote:
> On 12/12/2012 06:03 PM, Davidlohr Bueso wrote:
> > On Wed, 2012-12-12 at 17:48 -0800, Dave Hansen wrote:
> >> But if we went and did it per-DIMM (showing which physical addresses and
> >> NUMA nodes a DIMM maps to), wouldn't that be redundant with this
> >> proposed interface?
> > 
> > If DIMMs overlap between nodes, then we wouldn't have an exact range for
> > a node in question. Having both approaches would complement each other.
> 
> How is that possible?  If NUMA nodes are defined by distances from CPUs
> to memory, how could a DIMM have more than a single distance to any
> given CPU?

Can't this occur when interleaving emulated nodes with physical ones?

> 
> >> How do you plan to use this in practice, btw?
> > 
> > It started because I needed to recognize the address of a node to remove
> > it from the e820 mappings and have the system "ignore" the node's
> > memory.
> 
> Actually, now that I think about it, can you check in the
> /sys/devices/system/ directories for memory and nodes?  We have linkages
> there for each memory section to every NUMA node, and you can also
> derive the physical address from the phys_index in each section.  That
> should allow you to work out physical addresses for a given node.
> 

I had looked at the memory-hotplug interface but found that this
'phys_index' doesn't include holes, while ->node_spanned_pages does.

Thanks,
Davidlohr

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ