Message-ID: <alpine.DEB.2.11.1509101701220.11096@east.gentwo.org>
Date: Thu, 10 Sep 2015 17:02:31 -0500 (CDT)
From: Christoph Lameter <cl@...ux.com>
To: Tejun Heo <tj@...nel.org>
cc: Tang Chen <tangchen@...fujitsu.com>, jiang.liu@...ux.intel.com,
mika.j.penttila@...il.com, mingo@...hat.com,
akpm@...ux-foundation.org, rjw@...ysocki.net, hpa@...or.com,
yasu.isimatu@...il.com, isimatu.yasuaki@...fujitsu.com,
kamezawa.hiroyu@...fujitsu.com, izumi.taku@...fujitsu.com,
gongzhaogang@...pur.com, qiaonuohan@...fujitsu.com, x86@...nel.org,
linux-acpi@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Gu Zheng <guz.fnst@...fujitsu.com>
Subject: Re: [PATCH v2 3/7] x86, gfp: Cache best near node for memory
allocation.
On Thu, 10 Sep 2015, Tejun Heo wrote:
> > Why not just update node_data[]->node_zonelist in the first place?
> > Also, what's the synchronization rule here? How are allocators
> > synchronized against node hot [un]plugs?
>
> Also, shouldn't kmalloc_node() or any public allocator fall back
> automatically to a near node w/o GFP_THISNODE? Why is this failing at
> all? I get that cpu id -> node id mapping changing messes up the
> locality but allocations shouldn't fail, right?
Without a node specification, allocations are subject to various
constraints and memory policies; they do not simply fall back to the next
node. The memory load may require spreading the allocations over multiple
nodes, the application may have specified which nodes are to be used, and
so on.