Message-ID: <alpine.DEB.1.10.0903041011520.3850@qirst.com>
Date:	Wed, 4 Mar 2009 10:23:16 -0500 (EST)
From:	Christoph Lameter <cl@...ux-foundation.org>
To:	David Rientjes <rientjes@...gle.com>
cc:	Paul Menage <menage@...gle.com>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Randy Dunlap <randy.dunlap@...cle.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [patch 2/2] slub: enforce cpuset restrictions for cpu slabs

On Tue, 3 Mar 2009, David Rientjes wrote:

> > Presumably in most cases all cpusets would have slab_hardwall set to
> > the same value.
>
> Christoph, would a `slab_hardwall' cpuset setting address your concerns?

That would make the per-object memory policies in SLUB configurable? If
you can do that without a regression, and it's clean, then it would be
acceptable.

Again, if you want per-object memory policies in SLUB then they need to be
added consistently. You would, e.g., also have to check for an MPOL_BIND
condition where you check for cpuset nodes, and make sure that __slab_alloc
goes round robin on MPOL_INTERLEAVE, etc. You end up with a nightmare
implementation similar to that stuff in SLAB. And as far as I know that
still has its issues, since e.g. the MPOL_INTERLEAVE counter for objects
interferes with the MPOL_INTERLEAVE node selection for pages, which may
result in strange sequences of page placement on nodes because there were
intermediate allocations from slabs.
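To make the round-robin point above concrete: a minimal userspace sketch of
MPOL_INTERLEAVE-style node rotation (the struct and function names here are
illustrative assumptions, not the actual kernel code) could look like this.
The key property is a per-policy counter advanced on every allocation, which
is exactly the state that object-level interleaving would share, and fight
over, with page-level interleaving:

```c
#include <assert.h>

/* Hypothetical sketch, not kernel code: round-robin node selection for an
 * MPOL_INTERLEAVE-style policy. A per-object policy in __slab_alloc would
 * need an equivalent rotation, advancing the same kind of counter that
 * page-level interleave already uses. */
struct interleave_policy {
	unsigned long nodemask; /* bit i set => node i allowed (<= 64 nodes assumed) */
	unsigned int next;      /* rotation counter advanced per allocation */
};

/* Return the next allowed node and advance the counter; -1 if mask empty. */
static int interleave_next_node(struct interleave_policy *pol, int max_nodes)
{
	int tries;

	if (pol->nodemask == 0)
		return -1;
	for (tries = 0; tries < max_nodes; tries++) {
		int node = pol->next % max_nodes;

		pol->next++;
		if (pol->nodemask & (1UL << node))
			return node;
	}
	return -1;
}
```

With nodes 0 and 2 allowed, successive calls yield 0, 2, 0, 2, ...; if object
allocations also bump the counter, the page-level sequence is perturbed, which
is the interference described above.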

Memory policies and cpusets were initially designed to deal with page
allocations, not with allocations of small objects. If you read the numactl
manpage it becomes quite clear that we are dealing with page chunks
(look at the --touch or --strict options, etc.).

The intent is to spread memory in page chunks over NUMA nodes. That is
satisfied if the page allocations of the slab allocator are controllable
by memory policies and cpusets. And yes, the page allocations may only
roughly correlate with the tasks that are consuming objects from shared
pools.
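For reference, the slab_hardwall idea being discussed amounts to a check like
the following userspace sketch (the names and the per-cpuset flag are
assumptions for illustration, not the proposed patch): before handing out an
object from a cached cpu slab, verify that the slab page's node is permitted
by the task's cpuset when hardwall is enabled.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of a slab_hardwall-style check, simulated in
 * userspace; not the actual patch under discussion. */
struct cpuset_sim {
	unsigned long mems_allowed; /* bit i set => node i allowed (<= 64 nodes assumed) */
	bool slab_hardwall;         /* hypothetical per-cpuset flag */
};

/* Would the cached cpu slab on slab_node be acceptable for this cpuset? */
static bool cpu_slab_node_allowed(const struct cpuset_sim *cs, int slab_node)
{
	if (!cs->slab_hardwall)
		return true; /* default: keep using the cpu slab regardless of node */
	return (cs->mems_allowed & (1UL << slab_node)) != 0;
}
```

With the flag clear, behavior is unchanged; with it set, an off-node cpu slab
would be refused and __slab_alloc would have to fall back to the page
allocator, which is where the cost of enforcing object-level placement shows up.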
