Date:	Fri, 24 Jul 2015 16:06:08 -0700 (PDT)
From:	David Rientjes <rientjes@...gle.com>
To:	Vlastimil Babka <vbabka@...e.cz>
cc:	Christoph Lameter <cl@...ux.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mgorman@...e.de>, Greg Thelen <gthelen@...gle.com>,
	"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
	Pekka Enberg <penberg@...nel.org>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Subject: Re: [RFC v2 4/4] mm: fallback for offline nodes in
 alloc_pages_node

On Fri, 24 Jul 2015, Vlastimil Babka wrote:

> >>> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> >>> index 531c72d..104a027 100644
> >>> --- a/include/linux/gfp.h
> >>> +++ b/include/linux/gfp.h
> >>> @@ -321,8 +321,12 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
> >>>  						unsigned int order)
> >>>  {
> >>>  	/* Unknown node is current (or closest) node */
> >>> -	if (nid == NUMA_NO_NODE)
> >>> +	if (nid == NUMA_NO_NODE) {
> >>>  		nid = numa_mem_id();
> >>> +	} else if (!node_online(nid)) {
> >>> +		VM_WARN_ON(!node_online(nid));
> >>> +		nid = numa_mem_id();
> >>> +	}
> >>
> >> I would think you would only want this for debugging purposes.  The
> >> overwhelming majority of hardware out there has no memory
> >> onlining/offlining capability, after all, and this adds overhead to each
> >> call to alloc_pages_node.
> >>
> >> Make this depend on CONFIG_DEBUG_VM or some such thing?
> >>
> > 
> > Yeah, the suggestion was for VM_WARN_ON() in the conditional, but the 
> > placement has changed somewhat because of the new __alloc_pages_node().  I 
> > think
> > 
> > 	else if (VM_WARN_ON(!node_online(nid)))
> > 		nid = numa_mem_id();
> > 
> > should be fine since it only triggers for CONFIG_DEBUG_VM.
> 
> Um, so from your original suggestion I thought you assumed that the condition
> inside VM_WARN_ON is evaluated regardless of CONFIG_DEBUG_VM, and that it just
> will or will not generate a warning. That is how BUG_ON works, but VM_WARN_ON
> (and VM_BUG_ON) doesn't. IIUC, VM_WARN_ON() with !CONFIG_DEBUG_VM will always
> evaluate to false.

Right, that's what Christoph is also suggesting.  VM_WARN_ON without 
CONFIG_DEBUG_VM should permit the compiler to check the expression but not 
generate any code; we don't want to check node_online() here for every 
allocation, since it's only a debugging measure.
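
For illustration, the semantics assumed here look roughly like this (a 
sketch, not the exact mmdebug.h definition):

	/*
	 * Sketch only: with CONFIG_DEBUG_VM the condition is evaluated and
	 * warns when true; without it, the compiler still type-checks the
	 * expression but no runtime check is emitted, and the construct
	 * always evaluates to false.
	 */
	#ifdef CONFIG_DEBUG_VM
	#define VM_WARN_ON(cond)	WARN_ON(cond)
	#else
	#define VM_WARN_ON(cond)	({ BUILD_BUG_ON_INVALID(cond); false; })
	#endif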

> That's because I didn't think you were suggesting that the "nid = numa_mem_id()"
> fixup for !node_online(nid) should happen only on CONFIG_DEBUG_VM kernels. But
> it seems that you do suggest that? I would understand if the fixup (correcting
> an offline node to one that's online) were done regardless of DEBUG_VM, with
> DEBUG_VM just switching between a silent and a noisy fixup. But having a debug
> option alter the outcome seems wrong?

Hmm, not sure why this is surprising; I don't expect people to deploy 
production kernels with CONFIG_DEBUG_VM enabled, it's far too expensive.  
I was expecting they would enable it for, well... debugging :)

In that case, if nid is a valid node but offline, then the nid = 
numa_mem_id() fixup seems fine to allow the kernel to keep running while 
the problem is debugged.
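
With that placement, the helper would read something like the following (a 
sketch that assumes the VM_WARN_ON() semantics above, not the final form of 
the patch):

	static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
							unsigned int order)
	{
		/* Unknown node is current (or closest) node */
		if (nid == NUMA_NO_NODE)
			nid = numa_mem_id();
		else if (VM_WARN_ON(!node_online(nid)))
			/* debug-only fixup: warn, fall back to a nearby node */
			nid = numa_mem_id();

		return __alloc_pages(gfp_mask, order,
				     node_zonelist(nid, gfp_mask));
	}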

When a node is offlined as a result of memory hotplug, the pgdat doesn't 
get freed, so the node can be onlined later.  Thus, alloc_pages_node() with 
an offline node and !CONFIG_DEBUG_VM may not panic even without the fixup.  
If it doesn't, the fixup can probably be removed entirely because we're 
covered.
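
That's because node_zonelist() only dereferences the pgdat, which survives 
the offline; if I remember the gfp.h definition correctly, it is just:

	/*
	 * NODE_DATA(nid) still points at a valid pg_data_t after the node
	 * is offlined, so this lookup doesn't fault; the allocator then
	 * falls back through the zonelist instead of panicking.
	 */
	static inline struct zonelist *node_zonelist(int nid, gfp_t flags)
	{
		return NODE_DATA(nid)->node_zonelists + gfp_zonelist(flags);
	}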