Message-ID: <alpine.DEB.2.10.1507241251460.5215@chino.kir.corp.google.com>
Date: Fri, 24 Jul 2015 12:54:32 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Christoph Lameter <cl@...ux.com>
cc: Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>, Greg Thelen <gthelen@...gle.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
Pekka Enberg <penberg@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Subject: Re: [RFC v2 4/4] mm: fallback for offline nodes in alloc_pages_node
On Fri, 24 Jul 2015, Christoph Lameter wrote:
> On Fri, 24 Jul 2015, Vlastimil Babka wrote:
>
> > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > index 531c72d..104a027 100644
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -321,8 +321,12 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
> > unsigned int order)
> > {
> > /* Unknown node is current (or closest) node */
> > - if (nid == NUMA_NO_NODE)
> > + if (nid == NUMA_NO_NODE) {
> > nid = numa_mem_id();
> > + } else if (!node_online(nid)) {
> > + VM_WARN_ON(!node_online(nid));
> > + nid = numa_mem_id();
> > + }
>
> I would think you would only want this for debugging purposes. The
> overwhelming majority of hardware out there has no memory
> onlining/offlining capability after all and this adds the overhead to each
> call to alloc_pages_node.
>
> Make this depend on CONFIG_DEBUG_VM or some such thing?
>
Yeah, the suggestion was for VM_WARN_ON() in the conditional, but the
placement has changed somewhat because of the new __alloc_pages_node().
I think

	else if (VM_WARN_ON(!node_online(nid)))
		nid = numa_mem_id();

should be fine since it only triggers for CONFIG_DEBUG_VM.