Message-ID: <20150910193819.GJ8114@mtj.duckdns.org>
Date: Thu, 10 Sep 2015 15:38:19 -0400
From: Tejun Heo <tj@...nel.org>
To: Tang Chen <tangchen@...fujitsu.com>
Cc: jiang.liu@...ux.intel.com, mika.j.penttila@...il.com,
mingo@...hat.com, akpm@...ux-foundation.org, rjw@...ysocki.net,
hpa@...or.com, yasu.isimatu@...il.com,
isimatu.yasuaki@...fujitsu.com, kamezawa.hiroyu@...fujitsu.com,
izumi.taku@...fujitsu.com, gongzhaogang@...pur.com,
qiaonuohan@...fujitsu.com, x86@...nel.org,
linux-acpi@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Gu Zheng <guz.fnst@...fujitsu.com>,
Christoph Lameter <cl@...ux-foundation.org>
Subject: Re: [PATCH v2 3/7] x86, gfp: Cache best near node for memory allocation.
(cc'ing Christoph Lameter)
On Thu, Sep 10, 2015 at 03:29:35PM -0400, Tejun Heo wrote:
> Hello,
>
> On Thu, Sep 10, 2015 at 12:27:45PM +0800, Tang Chen wrote:
> > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > index ad35f30..1a1324f 100644
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -307,13 +307,19 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
> > if (nid < 0)
> > nid = numa_node_id();
> >
> > + if (!node_online(nid))
> > + nid = get_near_online_node(nid);
> > +
> > return __alloc_pages(gfp_mask, order, node_zonelist(nid, gfp_mask));
> > }
>
> Why not just update node_data[]->node_zonelist in the first place?
> Also, what's the synchronization rule here? How are allocators
> synchronized against node hot [un]plugs?
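For reference, the lookup the patch introduces amounts to roughly the following user-space model (not the actual kernel implementation; the node count, online map, and SLIT-style distance table below are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* User-space sketch of get_near_online_node(): among the online
 * nodes, pick the one with the smallest NUMA distance from @nid.
 * All numbers here are made up for illustration. */
#define NR_NODES 4

static bool online[NR_NODES] = { true, true, false, true };

/* SLIT-style matrix: distance[i][j] is the distance from node i to j. */
static const int distance[NR_NODES][NR_NODES] = {
	{ 10, 20, 30, 40 },
	{ 20, 10, 20, 30 },
	{ 30, 20, 10, 20 },
	{ 40, 30, 20, 10 },
};

static int get_near_online_node(int nid)
{
	int best = -1, best_dist = 0;

	for (int i = 0; i < NR_NODES; i++) {
		if (!online[i] || i == nid)
			continue;
		if (best < 0 || distance[nid][i] < best_dist) {
			best = i;
			best_dist = distance[nid][i];
		}
	}
	return best;	/* -1 if no other node is online */
}
```

With the table above, an allocation aimed at offline node 2 would be redirected to node 1 (distance 20, the first of the two nearest online nodes scanned). Note this is exactly the kind of remapping a distance-ordered zonelist would already do.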
Also, shouldn't kmalloc_node() or any public allocator fall back
automatically to a nearby node when __GFP_THISNODE isn't set? Why is
this failing at all? I get that a changing cpu id -> node id mapping
hurts locality, but allocations shouldn't fail, right?
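To make the point concrete, the zonelist fallback in question can be modeled roughly like this (a user-space sketch, not kernel code; node count, free-page counts, and the THISNODE flag value are invented):

```c
#include <assert.h>

/* Rough model of zonelist fallback: each node's zonelist orders all
 * nodes by distance, and an allocation walks it until a node with
 * free pages is found -- unless a THISNODE-style flag restricts the
 * walk to the first entry. Numbers are invented for illustration. */
#define NR_NODES 3
#define THISNODE 0x1

static int free_pages[NR_NODES] = { 0, 5, 5 };	/* node 0 exhausted */

/* zonelist[n] = nodes in fallback order for allocations on node n */
static const int zonelist[NR_NODES][NR_NODES] = {
	{ 0, 1, 2 },
	{ 1, 0, 2 },
	{ 2, 1, 0 },
};

/* Returns the node the page came from, or -1 on failure. */
static int alloc_page_on(int nid, int flags)
{
	int tries = (flags & THISNODE) ? 1 : NR_NODES;

	for (int i = 0; i < tries; i++) {
		int n = zonelist[nid][i];
		if (free_pages[n] > 0) {
			free_pages[n]--;
			return n;
		}
	}
	return -1;
}
```

In this model an ordinary allocation on exhausted node 0 still succeeds from node 1, and only the THISNODE-restricted one fails, which is why an offline or empty node shouldn't make generic allocations fail.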
Thanks.
--
tejun