Message-ID: <20200708062217.GE386073@linux.ibm.com>
Date: Wed, 8 Jul 2020 09:22:17 +0300
From: Mike Rapoport <rppt@...ux.ibm.com>
To: Dan Williams <dan.j.williams@...el.com>
Cc: Justin He <Justin.He@....com>, Michal Hocko <mhocko@...nel.org>,
David Hildenbrand <david@...hat.com>,
Catalin Marinas <Catalin.Marinas@....com>,
Will Deacon <will@...nel.org>,
Vishal Verma <vishal.l.verma@...el.com>,
Dave Jiang <dave.jiang@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Baoquan He <bhe@...hat.com>,
Chuhong Yuan <hslester96@...il.com>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>,
Kaly Xin <Kaly.Xin@....com>
Subject: Re: [PATCH v2 1/3] arm64/numa: export memory_add_physaddr_to_nid as
 EXPORT_SYMBOL_GPL

On Tue, Jul 07, 2020 at 09:27:43PM -0700, Dan Williams wrote:
> On Tue, Jul 7, 2020 at 9:08 PM Justin He <Justin.He@....com> wrote:
> [..]
> > > Especially for architectures that use memblock info for numa info
> > > (which seems to be everyone except x86) why not implement a generic
> > > memory_add_physaddr_to_nid() that does:
> > >
> > > int memory_add_physaddr_to_nid(u64 addr)
> > > {
> > > 	unsigned long start_pfn, end_pfn, pfn = PHYS_PFN(addr);
> > > 	int nid;
> > >
> > > 	for_each_online_node(nid) {
> > > 		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
> > > 		if (pfn >= start_pfn && pfn < end_pfn)
> > > 			return nid;
> > > 	}
> > > 	return NUMA_NO_NODE;
> > > }
> >
> > Thanks for your suggestion.
> > Could I wrap this code and let memory_add_physaddr_to_nid() simply invoke
> > phys_to_target_node()?
>
> I think it needs to be the reverse. phys_to_target_node() should call
> memory_add_physaddr_to_nid() by default, but fall back to searching
> reserved memory address ranges in memblock. See phys_to_target_node()
> in arch/x86/mm/numa.c. That one uses numa_meminfo instead of memblock,
> but the principle is the same, i.e. a target node may not be
> represented in memblock.memory, but only in memblock.reserved. I'm working on
> a patch to provide a function similar to get_pfn_range_for_nid() that
> operates on reserved memory.
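
A rough sketch of that ordering, with a hypothetical
get_reserved_pfn_range_for_nid() standing in for the reserved-memory
counterpart of get_pfn_range_for_nid() you mention, might look like:

int phys_to_target_node(u64 start)
{
	unsigned long start_pfn, end_pfn, pfn = PHYS_PFN(start);
	/* Default: the address is covered by memblock.memory */
	int nid = memory_add_physaddr_to_nid(start);

	if (nid != NUMA_NO_NODE)
		return nid;

	/*
	 * Fall back to reserved ranges for targets that are not in
	 * memblock.memory; get_reserved_pfn_range_for_nid() is
	 * hypothetical here.
	 */
	for_each_online_node(nid) {
		get_reserved_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
		if (pfn >= start_pfn && pfn < end_pfn)
			return nid;
	}
	return NUMA_NO_NODE;
}
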
Do we really need yet another memblock iterator?
I think only x86 has memory that is not in memblock.memory but only in
memblock.reserved.
--
Sincerely yours,
Mike.