Date:	Tue, 3 Mar 2009 11:47:32 -0500 (EST)
From:	Christoph Lameter <cl@...ux-foundation.org>
To:	David Rientjes <rientjes@...gle.com>
cc:	Pekka Enberg <penberg@...helsinki.fi>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Paul Menage <menage@...gle.com>,
	Randy Dunlap <randy.dunlap@...cle.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [patch 2/2] slub: enforce cpuset restrictions for cpu slabs

On Mon, 2 Mar 2009, David Rientjes wrote:

> Slab allocations should respect cpuset hardwall restrictions.  Otherwise,
> it is possible for tasks in a cpuset to fill slabs allocated on mems
> assigned to a disjoint cpuset.

Not sure that I understand this correctly. If two tasks belonging to
disjoint cpusets run on the same processor and both perform slab
allocations without specifying a node, then one task could allocate a
slab page on a node belonging to its cpuset, take one object from it,
and the second task on the same cpu could then consume the remaining
objects from a nodeset that it would otherwise not be allowed to
access. On the other hand, the second task will likely also allocate
memory from its own allowed nodes that is then consumed by the first
task. This is a tradeoff that comes with pushing the enforcement of
memory policies / cpusets out of the slab allocator and relying on the
page allocator for it.
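
Just to illustrate: the fastpath hands out objects purely based on the
per cpu freelist and (on NUMA) the node of the cpu slab; there is no
cpuset check anywhere. A simplified sketch of slab_alloc() in
mm/slub.c (irq handling, debug hooks and statistics omitted):

	static __always_inline void *slab_alloc(struct kmem_cache *s,
			gfp_t gfpflags, int node, unsigned long addr)
	{
		void **object;
		struct kmem_cache_cpu *c;

		c = get_cpu_slab(s, smp_processor_id());
		if (unlikely(!c->freelist || !node_match(c, node)))
			/* slow path: may deactivate the cpu slab */
			object = __slab_alloc(s, gfpflags, node, addr, c);
		else {
			/* fast path: pop first object off the freelist */
			object = c->freelist;
			c->freelist = object[c->offset];
		}
		return object;
	}

So whichever task happens to run next on the cpu takes objects from
whatever slab page the cpu slab currently points to, regardless of
which cpuset's nodes that page was allocated from.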

> If an allocation is intended for a particular node that the task does not
> have access to because of its cpuset, an allowed partial slab is used
> instead of failing.

This would get us back to the slab allocator enforcing memory policies.

> -static inline int node_match(struct kmem_cache_cpu *c, int node)
> +static inline int node_match(struct kmem_cache_cpu *c, int node, gfp_t gfpflags)
>  {
>  #ifdef CONFIG_NUMA
>  	if (node != -1 && c->node != node)
>  		return 0;
>  #endif
> -	return 1;
> +	return cpuset_node_allowed_hardwall(c->node, gfpflags);
>  }

This is a hotpath function, and an expensive function call here would
significantly impact performance.

It will also cause the per cpu slab to be reloaded after each task
switch in the scenario discussed above.
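
With the hardwall check in node_match() the sequence on a shared cpu
would look roughly like this (hypothetical interleaving of two tasks A
and B in disjoint cpusets):

	task A (cpuset X): kmalloc() -> node_match() passes -> fastpath
	switch to task B (cpuset Y):
		kmalloc() -> node_match() fails the hardwall check
			  -> __slab_alloc() deactivates A's cpu slab and
			     loads a new slab from B's allowed nodes
	switch back to task A:
		kmalloc() -> fails again -> deactivate and reload once more

Every switch between such tasks throws away the cpu slab and acquires
a new one.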

The solution that SLAB has for this scenario is to simply not use the
fastpath for off-node allocations. This means that all allocations not
targeting the current node always go through the slow path.
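
For comparison, a simplified sketch of the mm/slab.c behavior (the
real code has more cases; the alternate_node_alloc() handling for
PF_SPREAD_SLAB / PF_MEMPOLICY tasks and all error handling omitted):

	static __always_inline void *
	__cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
	{
		if (nodeid == numa_node_id())
			/* local node: per cpu array_cache fastpath */
			return ____cache_alloc(cachep, flags);

		/*
		 * Off node: bypass the per cpu cache, take the node's
		 * list_lock and allocate from the remote node's lists.
		 */
		return ____cache_alloc_node(cachep, flags, nodeid);
	}

The per cpu array_cache therefore essentially holds only local node
objects.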



