Message-ID: <1355361524.5255.9.camel@buesod1.americas.hpqcorp.net>
Date: Wed, 12 Dec 2012 17:18:44 -0800
From: Davidlohr Bueso <davidlohr.bueso@...com>
To: Dave Hansen <dave@...ux.vnet.ibm.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] mm: add node physical memory range to sysfs
On Fri, 2012-12-07 at 16:17 -0800, Dave Hansen wrote:
> On 12/07/2012 03:51 PM, Andrew Morton wrote:
> >> > +static ssize_t node_read_memrange(struct device *dev,
> >> > +			struct device_attribute *attr, char *buf)
> >> > +{
> >> > +	int nid = dev->id;
> >> > +	unsigned long start_pfn = NODE_DATA(nid)->node_start_pfn;
> >> > +	unsigned long end_pfn = start_pfn + NODE_DATA(nid)->node_spanned_pages;
> > hm. Is this correct for all of
> > FLATMEM/SPARSEMEM/SPARSEMEM_VMEMMAP/DISCONTIGMEM/etc?
>
> It's not _wrong_ per se, but it's not super precise, either.
>
> The problem is, it's quite valid for these node_start/spanned ranges to
> overlap between two or more nodes on some hardware. So, if the desired
> purpose is to map nodes to DIMMs, then this can only work on _some_
> hardware, not all. For some configurations it would be completely
> useless for that purpose.
>
> Seems like the better way to do this would be to expose the DIMMs
> themselves in some way, and then map _those_ back to a node.
>
Good point; from a DIMM perspective I agree, and will look into this.
However, IMHO, having the range of physical addresses for every node
still provides valuable information from a NUMA point of view, for
example when dealing with node-related e820 mappings.
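
To make that concrete, the read side of the attribute essentially boils
down to the sketch below; the physical-address output format and the
DEVICE_ATTR line are only illustrative here, not necessarily what v2
will use:

static ssize_t node_read_memrange(struct device *dev,
				  struct device_attribute *attr, char *buf)
{
	int nid = dev->id;
	unsigned long start_pfn = NODE_DATA(nid)->node_start_pfn;
	unsigned long end_pfn = start_pfn + NODE_DATA(nid)->node_spanned_pages;

	/* Report the spanned range as physical addresses (illustrative format) */
	return sprintf(buf, "%#llx-%#llx\n",
		       (unsigned long long)PFN_PHYS(start_pfn),
		       (unsigned long long)PFN_PHYS(end_pfn) - 1);
}
static DEVICE_ATTR(memrange, S_IRUGO, node_read_memrange, NULL);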
Andrew, with the documentation patch, would you be willing to pick up a
v2 of this?
Thanks,
Davidlohr