Message-ID: <50C95E4A.9010509@linux.vnet.ibm.com>
Date: Wed, 12 Dec 2012 20:49:14 -0800
From: Dave Hansen <dave@...ux.vnet.ibm.com>
To: Davidlohr Bueso <davidlohr.bueso@...com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] mm: add node physical memory range to sysfs
On 12/12/2012 06:03 PM, Davidlohr Bueso wrote:
> On Wed, 2012-12-12 at 17:48 -0800, Dave Hansen wrote:
>> But if we went and did it per-DIMM (showing which physical addresses and
>> NUMA nodes a DIMM maps to), wouldn't that be redundant with this
>> proposed interface?
>
> If DIMMs overlap between nodes, then we wouldn't have an exact range for
> the node in question. The two approaches would complement each other.
How is that possible? If NUMA nodes are defined by distances from CPUs
to memory, how could a DIMM have more than a single distance to any
given CPU?
>> How do you plan to use this in practice, btw?
>
> It started because I needed to determine the physical address range of a
> node so I could remove it from the e820 mappings and have the system
> "ignore" the node's memory.
Actually, now that I think about it, can you check the
/sys/devices/system/ directories for memory and nodes? We have linkages
there from each memory section to its NUMA node, and you can also
derive the physical address from the phys_index in each section. That
should allow you to work out the physical addresses for a given node.
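
Untested, but something along these lines ought to do it on a kernel
that exposes those files (the node number is hardcoded purely for
illustration, and both block_size_bytes and phys_index are hex strings
in sysfs):

import glob
import os

NODE = 1  # hypothetical node number, just for illustration

# Memory block size in bytes; sysfs reports it as a hex string.
with open("/sys/devices/system/memory/block_size_bytes") as f:
    block_size = int(f.read().strip(), 16)

ranges = []
# Each memoryN symlink under the node directory is one memory section.
for link in glob.glob("/sys/devices/system/node/node%d/memory*" % NODE):
    index_file = os.path.join(link, "phys_index")
    if not os.path.exists(index_file):
        continue
    with open(index_file) as f:
        phys_index = int(f.read().strip(), 16)
    start = phys_index * block_size
    ranges.append((start, start + block_size - 1))

for start, end in sorted(ranges):
    print("0x%016x-0x%016x" % (start, end))

Each memoryN entry covers one block_size_bytes-sized chunk, so you'd
want to coalesce adjacent blocks if you need contiguous ranges.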