Date:	Sat, 6 Feb 2010 14:31:07 -0800 (PST)
From:	David Rientjes <rientjes@...gle.com>
To:	Andi Kleen <andi@...stfloor.org>
cc:	submit@...stfloor.org, linux-kernel@...r.kernel.org,
	haicheng.li@...el.com, Pekka Enberg <penberg@...helsinki.fi>,
	linux-mm@...ck.org
Subject: Re: [PATCH] [1/4] SLAB: Handle node-not-up case in fallback_alloc()

On Sat, 6 Feb 2010, Andi Kleen wrote:

> > If a hot-added node has not been initialized for the cache, your code is 
> > picking an existing one in zonelist order which may be excluded by 
> > current's cpuset.  Thus, your code has a very real chance of having 
> > kmem_getpages() return NULL because get_page_from_freelist() will reject 
> > non-atomic ALLOC_CPUSET allocations for prohibited nodes.  That isn't a 
> > scenario that requires a "funny cpuset," it just has to not allow whatever 
> > initialized node comes first in the zonelist.
> 
> The point was that you would need to run whoever triggers the memory
> hotadd in a cpuset with limitations. That would be a clear
> "don't do that if it hurts"(tm).
>  

With a cpuset limited to a subset of memory nodes, yes.  And what
prohibits doing that, other than your new code?
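
(To be concrete, the rejection I mean is the cpuset check in
get_page_from_freelist()'s zonelist scan.  Quoting mm/page_alloc.c
roughly from memory, so treat this as a sketch rather than verbatim:

	for_each_zone_zonelist_nodemask(zone, z, zonelist,
						high_zoneidx, nodemask) {
		...
		if ((alloc_flags & ALLOC_CPUSET) &&
			!cpuset_zone_allowed_softwall(zone, gfp_mask))
				goto try_next_zone;
		...
	}

gfp_to_alloc_flags() only clears ALLOC_CPUSET for atomic allocations,
so for a __GFP_WAIT allocation every zone on a node prohibited by
current's cpuset is skipped and the allocation comes back NULL.)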

There's a second issue with this approach that I alluded to above:
you're picking the first initialized node for the cache based solely on
whether its nodelist is allocated.  kmem_getpages() may still return
NULL for that node when it would return a new slab for any other
initialized node, so you're better off trying them all.

In other words, my earlier (untested) suggestion:

diff --git a/mm/slab.c b/mm/slab.c
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3172,6 +3172,7 @@ static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags)
 	gfp_t local_flags;
 	struct zoneref *z;
 	struct zone *zone;
+	nodemask_t allowed_nodes = NODE_MASK_NONE;
 	enum zone_type high_zoneidx = gfp_zone(flags);
 	void *obj = NULL;
 	int nid;
@@ -3197,6 +3198,7 @@ retry:
 					flags | GFP_THISNODE, nid);
 				if (obj)
 					break;
+				node_set(nid, allowed_nodes);
 		}
 	}
 
@@ -3210,7 +3212,15 @@ retry:
 		if (local_flags & __GFP_WAIT)
 			local_irq_enable();
 		kmem_flagcheck(cache, flags);
-		obj = kmem_getpages(cache, local_flags, numa_node_id());
+		nid = numa_node_id();
+		if (cache->nodelists[nid])
+			obj = kmem_getpages(cache, local_flags, nid);
+		else
+			for_each_node_mask(nid, allowed_nodes) {
+				obj = kmem_getpages(cache, local_flags, nid);
+				if (obj)
+					break;
+			}
 		if (local_flags & __GFP_WAIT)
 			local_irq_disable();
 		if (obj) {
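
For context on why the node passed to kmem_getpages() needs an
initialized nodelist at all: the code immediately below this hunk
pushes the new page into the per-node queues (again roughly from
memory, so a sketch):

	if (obj) {
		/*
		 * Insert into the appropriate per node queues
		 */
		nid = page_to_nid(virt_to_page(obj));
		if (cache_grow(cache, flags, nid, obj))
			obj = ____cache_alloc_node(cache,
				flags | GFP_THISNODE, nid);
		...
	}

cache_grow() dereferences cache->nodelists[] for the node the page
actually came from, hence the preference above for nodes already known
to be initialized.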

Anyway, I'll leave these otherwise unnecessary limitations to Pekka.  
Thanks.
