Date:	Fri, 5 Feb 2010 13:06:56 -0800 (PST)
From:	David Rientjes <rientjes@...gle.com>
To:	Andi Kleen <andi@...stfloor.org>
cc:	submit@...stfloor.org, linux-kernel@...r.kernel.org,
	haicheng.li@...el.com, Pekka Enberg <penberg@...helsinki.fi>,
	linux-mm@...ck.org
Subject: Re: [PATCH] [1/4] SLAB: Handle node-not-up case in fallback_alloc()

On Wed, 3 Feb 2010, Andi Kleen wrote:

> When fallback_alloc() runs the node of the CPU might not be initialized yet.
> Handle this case by allocating in another node.
> 

That other node must be allowed by current's cpuset; otherwise 
kmem_getpages() will fail, since get_page_from_freelist() would then be 
iterating only over zones on nodes the cpuset does not allow.
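
For illustration, the new loop could reuse the same cpuset_zone_allowed_hardwall() 
check that the existing zonelist walk in fallback_alloc() already does, roughly 
like this (an untested sketch against 2.6.33, not a drop-in patch):

	nid = numa_node_id();
	if (cache->nodelists[nid] == NULL) {
		for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
			/* skip zones on nodes the cpuset does not allow */
			if (!cpuset_zone_allowed_hardwall(zone, flags))
				continue;
			nid = zone_to_nid(zone);
			if (cache->nodelists[nid])
				break;
		}
		if (!cache->nodelists[nid])
			return NULL;
	}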

> Signed-off-by: Andi Kleen <ak@...ux.intel.com>
> 
> ---
>  mm/slab.c |   19 ++++++++++++++++++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
> 
> Index: linux-2.6.33-rc3-ak/mm/slab.c
> ===================================================================
> --- linux-2.6.33-rc3-ak.orig/mm/slab.c
> +++ linux-2.6.33-rc3-ak/mm/slab.c
> @@ -3210,7 +3210,24 @@ retry:
>  		if (local_flags & __GFP_WAIT)
>  			local_irq_enable();
>  		kmem_flagcheck(cache, flags);
> -		obj = kmem_getpages(cache, local_flags, numa_node_id());
> +
> +		/*
> +		 * Node not set up yet? Try one that the cache has been set up
> +		 * for.
> +		 */
> +		nid = numa_node_id();
> +		if (cache->nodelists[nid] == NULL) {
> +			for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
> +				nid = zone_to_nid(zone);
> +				if (cache->nodelists[nid])
> +					break;

If you set a bit in a nodemask_t every time ____cache_alloc_node() fails in 
the previous for_each_zone_zonelist() iteration, you could just iterate 
that nodemask here without duplicating the zone_to_nid() and 
cache->nodelists[nid] != NULL checks.
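
For concreteness, the earlier walk could collect those nodes as it goes, 
along these lines ("allowed_nodes" is just a name I'm using for the sketch, 
it is not in the patch):

	nodemask_t allowed_nodes = NODE_MASK_NONE;
	...
	for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
		nid = zone_to_nid(zone);
		if (cpuset_zone_allowed_hardwall(zone, flags) &&
		    cache->nodelists[nid] &&
		    cache->nodelists[nid]->free_objects) {
			obj = ____cache_alloc_node(cache,
					flags | GFP_THISNODE, nid);
			if (obj)
				break;
			/* ____cache_alloc_node() failed here, remember it */
			node_set(nid, allowed_nodes);
		}
	}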

	nid = numa_node_id();
	if (!cache->nodelists[nid])
		/* local node not set up yet, try the nodes noted above */
		for_each_node_mask(nid, allowed_nodes) {
			obj = kmem_getpages(cache, local_flags, nid);
			if (obj)
				break;
		}
	else
		obj = kmem_getpages(cache, local_flags, nid);

This way you can try all allowed nodes for memory instead of just one when 
cache->nodelists[numa_node_id()] == NULL.

> +			}
> +			if (!cache->nodelists[nid])
> +				return NULL;
> +		}
> +
> +
> +		obj = kmem_getpages(cache, local_flags, nid);
>  		if (local_flags & __GFP_WAIT)
>  			local_irq_disable();
>  		if (obj) {