Date:	Thu, 13 Dec 2012 16:18:47 -0800
From:	Dave Hansen <dave@...ux.vnet.ibm.com>
To:	Davidlohr Bueso <davidlohr.bueso@...com>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] mm: add node physical memory range to sysfs

On 12/13/2012 03:15 PM, Davidlohr Bueso wrote:
> On Wed, 2012-12-12 at 20:49 -0800, Dave Hansen wrote:
>> How is that possible?  If NUMA nodes are defined by distances from CPUs
>> to memory, how could a DIMM have more than a single distance to any
>> given CPU?
> 
> Can't this occur when interleaving emulated nodes with physical ones?

I'm glad you mentioned numa=fake. Its interleaving node configuration
would also make the patch you've proposed completely useless.  Let's say
you've got a two-node system with 16GB of RAM:

|        0        |      1      |

If you use numa=fake=1G, you'll get the nodes interleaved like this:

|0|1|0|1|0|1|0|1|0|1|0|1|0|1|0|1|

The information that is exported from the interface you're proposing
would be:

node0: start_pfn=0  and spanned_pages = 15G
node1: start_pfn=1G and spanned_pages = 15G

In that situation, there is no way to figure out which node a given DIMM
belongs to, since the node ranges overlap.
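
If it helps to see where those numbers come from, here's a throwaway
userspace C sketch (not kernel code; the 1GB chunk size and two nodes
are just the example above) that reproduces the start/spanned values:

#include <stdio.h>

#define CHUNK_GB	1
#define NR_CHUNKS	16

int main(void)
{
	unsigned long start[2] = { ~0UL, ~0UL };
	unsigned long end[2]   = { 0, 0 };
	int i;

	for (i = 0; i < NR_CHUNKS; i++) {
		int nid = i % 2;			/* chunks alternate 0,1,0,1,... */
		unsigned long s = i * CHUNK_GB;		/* work in GB for simplicity */
		unsigned long e = s + CHUNK_GB;

		if (s < start[nid])
			start[nid] = s;
		if (e > end[nid])
			end[nid] = e;
	}

	for (i = 0; i < 2; i++)
		printf("node%d: start=%luGB spanned=%luGB\n",
		       i, start[i], end[i] - start[i]);
	return 0;
}

It prints 15GB spanned for both nodes, i.e. each node's range covers
nearly the whole machine.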

>>>> How do you plan to use this in practice, btw?
>>>
>>> It started because I needed to recognize the address of a node to remove
>>> it from the e820 mappings and have the system "ignore" the node's
>>> memory.
>>
>> Actually, now that I think about it, can you check in the
>> /sys/devices/system/ directories for memory and nodes?  We have linkages
>> there for each memory section to every NUMA node, and you can also
>> derive the physical address from the phys_index in each section.  That
>> should allow you to work out physical addresses for a given node.
>> 
> I had looked at the memory-hotplug interface but found that this
> 'phys_index' doesn't include holes, while ->node_spanned_pages does.

I'm not sure what you mean.  Each memory section in sysfs accounts for
SECTION_SIZE worth of memory, and sections are 128MB by default on x86_64.
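
In case it's useful, here's roughly what the userspace side of that
could look like.  This is an untested sketch; node0, the memoryX links
under the node directory, and the phys_index * block_size_bytes math
are my assumptions based on the current sysfs memory-hotplug layout:

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* read a single hex value from a sysfs file, 0 on failure */
static unsigned long long read_hex(const char *path)
{
	unsigned long long val = 0;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%llx", &val) != 1)
			val = 0;
		fclose(f);
	}
	return val;
}

int main(int argc, char **argv)
{
	const char *node = argc > 1 ? argv[1] : "node0";
	char path[256];
	unsigned long long block_size;
	struct dirent *de;
	DIR *dir;

	block_size = read_hex("/sys/devices/system/memory/block_size_bytes");

	snprintf(path, sizeof(path), "/sys/devices/system/node/%s", node);
	dir = opendir(path);
	if (!dir) {
		perror(path);
		return 1;
	}

	while ((de = readdir(dir)) != NULL) {
		unsigned long long idx, start;

		/* only the memoryX links created for this node */
		if (strncmp(de->d_name, "memory", 6) != 0)
			continue;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/memory/%s/phys_index",
			 de->d_name);
		idx = read_hex(path);

		start = idx * block_size;
		printf("%s %s: 0x%llx - 0x%llx\n", node, de->d_name,
		       start, start + block_size);
	}
	closedir(dir);
	return 0;
}

That only gives memory-block granularity, of course, but it should be
enough to work physical ranges back out of a node.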
